Red Hat Bugzilla – Attachment 1454318 Details for Bug 1594176: Error response from daemon: No such container: ceph-mon-controller-0
ansible.log with ceph-ansible RC9 package installed before OC deploy
ansible.log (text/plain), 6.20 MB, created by Filip Hubík on 2018-06-25 10:29:54 UTC
Description: ansible.log with ceph-ansible RC9 package installed before OC deploy
Filename: ansible.log
MIME Type: text/plain
Creator: Filip Hubík
Created: 2018-06-25 10:29:54 UTC
Size: 6.20 MB
>2018-06-25 05:55:06,133 p=25239 u=mistral | Using /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ansible.cfg as config file
>2018-06-25 05:55:06,760 p=25239 u=mistral | PLAY [Gather facts from undercloud] ********************************************
>2018-06-25 05:55:06,770 p=25239 u=mistral | TASK [Gathering Facts] *********************************************************
>2018-06-25 05:55:07,502 p=25239 u=mistral | ok: [undercloud]
>2018-06-25 05:55:07,517 p=25239 u=mistral | PLAY [Gather facts from overcloud] *********************************************
>2018-06-25 05:55:07,525 p=25239 u=mistral | TASK [Gathering Facts] *********************************************************
>2018-06-25 05:55:10,907 p=25239 u=mistral | ok: [compute-0]
>2018-06-25 05:55:10,929 p=25239 u=mistral | ok: [controller-0]
>2018-06-25 05:55:11,088 p=25239 u=mistral | ok: [ceph-0]
>2018-06-25 05:55:11,102 p=25239 u=mistral | PLAY [Load global variables] ***************************************************
>2018-06-25 05:55:11,124 p=25239 u=mistral | TASK [include_vars] ************************************************************
>2018-06-25 05:55:11,173 p=25239 u=mistral | ok: [controller-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.14,ceph-0.localdomain,ceph-0,172.17.3.14,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.10,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.16,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.16,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.16,ceph-0.external.localdomain,ceph-0.external,192.168.24.16,ceph-0.management.localdomain,ceph-0.management,192.168.24.16,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": 
"172.17.1.10,compute-0.localdomain,compute-0,172.17.3.16,compute-0.storage.localdomain,compute-0.storage,192.168.24.13,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.10,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.12,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.13,compute-0.external.localdomain,compute-0.external,192.168.24.13,compute-0.management.localdomain,compute-0.management,192.168.24.13,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.12,controller-0.localdomain,controller-0,172.17.3.10,controller-0.storage.localdomain,controller-0.storage,172.17.4.15,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.12,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.16,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.106,controller-0.external.localdomain,controller-0.external,192.168.24.14,controller-0.management.localdomain,controller-0.management,192.168.24.14,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/global_vars.yaml"], "changed": false}
>2018-06-25 05:55:11,198 p=25239 u=mistral | ok: [compute-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.14,ceph-0.localdomain,ceph-0,172.17.3.14,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.10,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.16,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.16,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.16,ceph-0.external.localdomain,ceph-0.external,192.168.24.16,ceph-0.management.localdomain,ceph-0.management,192.168.24.16,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": 
"172.17.1.10,compute-0.localdomain,compute-0,172.17.3.16,compute-0.storage.localdomain,compute-0.storage,192.168.24.13,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.10,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.12,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.13,compute-0.external.localdomain,compute-0.external,192.168.24.13,compute-0.management.localdomain,compute-0.management,192.168.24.13,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.12,controller-0.localdomain,controller-0,172.17.3.10,controller-0.storage.localdomain,controller-0.storage,172.17.4.15,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.12,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.16,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.106,controller-0.external.localdomain,controller-0.external,192.168.24.14,controller-0.management.localdomain,controller-0.management,192.168.24.14,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/global_vars.yaml"], "changed": false}
>2018-06-25 05:55:11,199 p=25239 u=mistral | ok: [undercloud] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.14,ceph-0.localdomain,ceph-0,172.17.3.14,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.10,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.16,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.16,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.16,ceph-0.external.localdomain,ceph-0.external,192.168.24.16,ceph-0.management.localdomain,ceph-0.management,192.168.24.16,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": 
"172.17.1.10,compute-0.localdomain,compute-0,172.17.3.16,compute-0.storage.localdomain,compute-0.storage,192.168.24.13,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.10,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.12,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.13,compute-0.external.localdomain,compute-0.external,192.168.24.13,compute-0.management.localdomain,compute-0.management,192.168.24.13,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.12,controller-0.localdomain,controller-0,172.17.3.10,controller-0.storage.localdomain,controller-0.storage,172.17.4.15,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.12,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.16,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.106,controller-0.external.localdomain,controller-0.external,192.168.24.14,controller-0.management.localdomain,controller-0.management,192.168.24.14,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/global_vars.yaml"], "changed": false}
>2018-06-25 05:55:11,231 p=25239 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.14,ceph-0.localdomain,ceph-0,172.17.3.14,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.10,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.16,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.16,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.16,ceph-0.external.localdomain,ceph-0.external,192.168.24.16,ceph-0.management.localdomain,ceph-0.management,192.168.24.16,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": 
"172.17.1.10,compute-0.localdomain,compute-0,172.17.3.16,compute-0.storage.localdomain,compute-0.storage,192.168.24.13,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.10,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.12,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.13,compute-0.external.localdomain,compute-0.external,192.168.24.13,compute-0.management.localdomain,compute-0.management,192.168.24.13,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.12,controller-0.localdomain,controller-0,172.17.3.10,controller-0.storage.localdomain,controller-0.storage,172.17.4.15,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.12,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.16,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.106,controller-0.external.localdomain,controller-0.external,192.168.24.14,controller-0.management.localdomain,controller-0.management,192.168.24.14,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/global_vars.yaml"], "changed": false}
>2018-06-25 05:55:11,238 p=25239 u=mistral | PLAY [Common roles for TripleO servers] *****************************************
>2018-06-25 05:55:11,258 p=25239 u=mistral | TASK [tripleo-bootstrap : Deploy required packages to bootstrap TripleO] *******
>2018-06-25 05:55:12,143 p=25239 u=mistral | ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180605100743.235e1ae.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
>2018-06-25 05:55:12,172 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180605100743.235e1ae.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
>2018-06-25 05:55:12,209 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180605100743.235e1ae.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
>2018-06-25 05:55:12,228 p=25239 u=mistral | TASK [tripleo-bootstrap : Create /var/lib/heat-config/tripleo-config-download directory for deployment data] ***
>2018-06-25 05:55:12,718 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-06-25 05:55:12,723 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-06-25 05:55:12,741 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-06-25 05:55:12,762 p=25239 u=mistral | TASK [tripleo-ssh-known-hosts : Template /etc/ssh/ssh_known_hosts] *************
>2018-06-25 05:55:13,784 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "117bc8e6e69e26cb9a45ce2336c03d5538674ff6", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "0561a8e7d2193f80d8f22e3d5e974f90", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1908, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920512.83-94296673447681/source", "state": "file", "uid": 0}
>2018-06-25 05:55:13,789 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "117bc8e6e69e26cb9a45ce2336c03d5538674ff6", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "0561a8e7d2193f80d8f22e3d5e974f90", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1908, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920512.86-239249207178047/source", "state": "file", "uid": 0}
>2018-06-25 05:55:13,816 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "117bc8e6e69e26cb9a45ce2336c03d5538674ff6", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "0561a8e7d2193f80d8f22e3d5e974f90", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1908, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920512.8-194907356451057/source", "state": "file", "uid": 0}
>2018-06-25 05:55:13,823 p=25239 u=mistral | PLAY [Overcloud deploy step tasks for step 0] **********************************
>2018-06-25 05:55:13,849 p=25239 u=mistral | TASK [include_role] ************************************************************
>2018-06-25 05:55:13,878 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:13,904 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:13,915 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:13,937 p=25239 u=mistral | TASK [include_role] ************************************************************
>2018-06-25 05:55:13,967 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:13,990 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:14,002 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:14,025 p=25239 u=mistral | TASK [include_role] ************************************************************
>2018-06-25 05:55:14,052 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:14,077 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:14,089 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:14,112 p=25239 u=mistral | TASK [include_role] ************************************************************
>2018-06-25 05:55:14,140 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:14,168 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:14,179 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:14,202 p=25239 u=mistral | TASK [include_role] ************************************************************
>2018-06-25 05:55:14,232 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:14,256 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:14,272 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:14,278 p=25239 u=mistral | PLAY [Server deployments] ******************************************************
>2018-06-25 05:55:14,302 p=25239 u=mistral | TASK [include] *****************************************************************
>2018-06-25 05:55:14,537 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/Controller/deployments.yaml for controller-0
>2018-06-25 05:55:14,546 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/Controller/deployments.yaml for controller-0
>2018-06-25 05:55:14,554 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/Controller/deployments.yaml for controller-0
>2018-06-25 05:55:14,564 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/Controller/deployments.yaml for controller-0
>2018-06-25 05:55:14,573 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/Controller/deployments.yaml for controller-0
>2018-06-25 05:55:14,582 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/Controller/deployments.yaml for controller-0
>2018-06-25 05:55:14,591 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/Controller/deployments.yaml for controller-0
>2018-06-25 05:55:14,600 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/Controller/deployments.yaml for controller-0
>2018-06-25 05:55:14,625 p=25239 u=mistral | TASK [Lookup deployment UUID] **************************************************
>2018-06-25 05:55:14,689 p=25239 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "b8caea11-6bbd-4280-9e4f-7f23681328c2"}, "changed": false}
>2018-06-25 05:55:14,714 p=25239 u=mistral | TASK [Render deployment file for NetworkDeployment] ****************************
>2018-06-25 05:55:15,382 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "62ba2d98e590d53811db6c7351afffc2dd95b55c", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-b8caea11-6bbd-4280-9e4f-7f23681328c2", "gid": 0, "group": "root", "md5sum": "88de7b9b94cf8a3821fd729f0f3d5432", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 10198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920514.78-252437918308476/source", "state": "file", "uid": 0}
>2018-06-25 05:55:15,407 p=25239 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] *********************
>2018-06-25 05:55:15,752 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}}
>2018-06-25 05:55:15,777 p=25239 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] **********************
>2018-06-25 05:55:15,796 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:15,822 p=25239 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] ***
>2018-06-25 05:55:15,838 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:15,865 p=25239 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************
>2018-06-25 05:55:15,880 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:55:15,906 p=25239 u=mistral | TASK [Run deployment NetworkDeployment] ****************************************
>2018-06-25 05:55:45,401 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/b8caea11-6bbd-4280-9e4f-7f23681328c2.notify.json)", "delta": "0:00:28.946441", "end": "2018-06-25 05:55:45.467900", "rc": 0, "start": "2018-06-25 05:55:16.521459", "stderr": "[2018-06-25 05:55:16,544] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b8caea11-6bbd-4280-9e4f-7f23681328c2.json\n[2018-06-25 05:55:45,038] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 
192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.14/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.106/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.14/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": 
\\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.106/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/25 05:55:17 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/25 05:55:17 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/25 05:55:17 AM] [INFO] Not using any mapping file.\\n[2018/06/25 05:55:17 AM] [INFO] Finding active nics\\n[2018/06/25 05:55:17 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/25 05:55:17 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/25 05:55:17 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/25 05:55:17 AM] [INFO] lo is not an active nic\\n[2018/06/25 05:55:17 AM] [INFO] No DPDK mapping available in path 
(/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/25 05:55:17 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/25 05:55:17 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/25 05:55:17 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/25 05:55:17 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/25 05:55:17 AM] [INFO] adding interface: eth0\\n[2018/06/25 05:55:17 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/25 05:55:17 AM] [INFO] adding bridge: br-isolated\\n[2018/06/25 05:55:17 AM] [INFO] adding interface: eth1\\n[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan20\\n[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan30\\n[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan40\\n[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan50\\n[2018/06/25 05:55:17 AM] [INFO] adding bridge: br-ex\\n[2018/06/25 05:55:17 AM] [INFO] adding custom route for interface: br-ex\\n[2018/06/25 05:55:17 AM] [INFO] adding interface: eth2\\n[2018/06/25 05:55:17 AM] [INFO] applying network configs...\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/06/25 05:55:17 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-br-ex\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/25 05:55:17 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/25 05:55:17 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/25 05:55:18 AM] [INFO] running ifup on bridge: br-ex\\n[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth2\\n[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth1\\n[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth0\\n[2018/06/25 05:55:27 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/25 05:55:31 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/25 05:55:35 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:55:39 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/25 05:55:43 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ 
os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-06-25 05:55:45,039] (heat-config) [DEBUG] [2018-06-25 05:55:16,565] (heat-config) [INFO] interface_name=nic1\n[2018-06-25 05:55:16,565] (heat-config) [INFO] bridge_name=br-ex\n[2018-06-25 05:55:16,565] (heat-config) [INFO] deploy_server_id=36f09d61-d3be-4f36-b08d-65f6c3b139be\n[2018-06-25 05:55:16,565] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-25 05:55:16,565] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-zfmkvj446xuu-0-jnx2f6uhtioa-NetworkDeployment-gakjnlwx3upd-TripleOSoftwareDeployment-kz534teopa7c/81c58065-f93c-410b-9bb7-06801369db18\n[2018-06-25 05:55:16,565] (heat-config) [INFO] 
deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-25 05:55:16,566] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-25 05:55:16,566] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b8caea11-6bbd-4280-9e4f-7f23681328c2\n[2018-06-25 05:55:45,034] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS\n\n[2018-06-25 05:55:45,034] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.14/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.106/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.14/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], 
\"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.106/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/06/25 05:55:17 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/06/25 05:55:17 AM] [INFO] Ifcfg net config provider created.\n[2018/06/25 05:55:17 AM] [INFO] Not using any mapping file.\n[2018/06/25 05:55:17 AM] [INFO] Finding active nics\n[2018/06/25 05:55:17 AM] [INFO] eth2 is an embedded active nic\n[2018/06/25 05:55:17 AM] [INFO] eth1 is an embedded active nic\n[2018/06/25 05:55:17 AM] [INFO] eth0 is an embedded active nic\n[2018/06/25 05:55:17 AM] [INFO] lo is not an active nic\n[2018/06/25 05:55:17 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/06/25 05:55:17 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/06/25 05:55:17 AM] [INFO] nic3 mapped to: eth2\n[2018/06/25 05:55:17 AM] [INFO] nic2 mapped to: eth1\n[2018/06/25 05:55:17 AM] [INFO] nic1 mapped to: 
eth0\n[2018/06/25 05:55:17 AM] [INFO] adding interface: eth0\n[2018/06/25 05:55:17 AM] [INFO] adding custom route for interface: eth0\n[2018/06/25 05:55:17 AM] [INFO] adding bridge: br-isolated\n[2018/06/25 05:55:17 AM] [INFO] adding interface: eth1\n[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan20\n[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan30\n[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan40\n[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan50\n[2018/06/25 05:55:17 AM] [INFO] adding bridge: br-ex\n[2018/06/25 05:55:17 AM] [INFO] adding custom route for interface: br-ex\n[2018/06/25 05:55:17 AM] [INFO] adding interface: eth2\n[2018/06/25 05:55:17 AM] [INFO] applying network configs...\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan20\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan40\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan50\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth2\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth1\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth0\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan50\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan20\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan40\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on bridge: br-ex\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/06/25 05:55:17 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/06/25 05:55:17 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/06/25 05:55:17 AM] [INFO] running ifup on bridge: br-isolated\n[2018/06/25 05:55:18 AM] [INFO] running ifup on bridge: br-ex\n[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth2\n[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth1\n[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth0\n[2018/06/25 05:55:27 AM] [INFO] running ifup on interface: vlan50\n[2018/06/25 05:55:31 AM] [INFO] running ifup on interface: vlan20\n[2018/06/25 05:55:35 AM] [INFO] running ifup on interface: vlan30\n[2018/06/25 05:55:39 AM] [INFO] running ifup on interface: vlan40\n[2018/06/25 05:55:43 AM] [INFO] running ifup on interface: vlan20\n[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan30\n[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan40\n[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' 
--type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.3\n++ '[' -n 192.168.24.3 ']'\n++ break\n++ echo 192.168.24.3\n+ local METADATA_IP=192.168.24.3\n+ '[' -n 192.168.24.3 ']'\n+ is_local_ip 192.168.24.3\n+ local IP_TO_CHECK=192.168.24.3\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.3/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\n+ _ping=ping\n+ [[ 192.168.24.3 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.3\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-06-25 05:55:45,034] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b8caea11-6bbd-4280-9e4f-7f23681328c2\n\n[2018-06-25 05:55:45,039] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-25 05:55:45,040] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b8caea11-6bbd-4280-9e4f-7f23681328c2.json < /var/lib/heat-config/deployed/b8caea11-6bbd-4280-9e4f-7f23681328c2.notify.json\n[2018-06-25 05:55:45,460] (heat-config) [INFO] \n[2018-06-25 05:55:45,460] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:55:16,544] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b8caea11-6bbd-4280-9e4f-7f23681328c2.json", "[2018-06-25 05:55:45,038] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.14/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", 
\\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.106/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.14/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, 
{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.106/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/25 05:55:17 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/25 05:55:17 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/25 05:55:17 AM] [INFO] Not using any mapping file.\\n[2018/06/25 05:55:17 AM] [INFO] Finding active nics\\n[2018/06/25 05:55:17 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/25 05:55:17 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/25 05:55:17 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/25 05:55:17 AM] [INFO] lo is not an active nic\\n[2018/06/25 05:55:17 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/25 05:55:17 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/25 05:55:17 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/25 05:55:17 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/25 05:55:17 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/25 05:55:17 AM] [INFO] 
adding interface: eth0\\n[2018/06/25 05:55:17 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/25 05:55:17 AM] [INFO] adding bridge: br-isolated\\n[2018/06/25 05:55:17 AM] [INFO] adding interface: eth1\\n[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan20\\n[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan30\\n[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan40\\n[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan50\\n[2018/06/25 05:55:17 AM] [INFO] adding bridge: br-ex\\n[2018/06/25 05:55:17 AM] [INFO] adding custom route for interface: br-ex\\n[2018/06/25 05:55:17 AM] [INFO] adding interface: eth2\\n[2018/06/25 05:55:17 AM] [INFO] applying network configs...\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/25 05:55:17 AM] [INFO] Writing 
config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/25 05:55:17 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/25 05:55:17 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/25 05:55:18 AM] [INFO] running ifup on bridge: br-ex\\n[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth2\\n[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth1\\n[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth0\\n[2018/06/25 05:55:27 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/25 05:55:31 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/25 05:55:35 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:55:39 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/25 05:55:43 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key 
os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-06-25 05:55:45,039] (heat-config) [DEBUG] [2018-06-25 05:55:16,565] (heat-config) [INFO] interface_name=nic1", "[2018-06-25 05:55:16,565] (heat-config) [INFO] bridge_name=br-ex", "[2018-06-25 05:55:16,565] (heat-config) [INFO] deploy_server_id=36f09d61-d3be-4f36-b08d-65f6c3b139be", "[2018-06-25 05:55:16,565] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-25 05:55:16,565] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-zfmkvj446xuu-0-jnx2f6uhtioa-NetworkDeployment-gakjnlwx3upd-TripleOSoftwareDeployment-kz534teopa7c/81c58065-f93c-410b-9bb7-06801369db18", "[2018-06-25 05:55:16,565] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-25 05:55:16,566] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-25 05:55:16,566] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b8caea11-6bbd-4280-9e4f-7f23681328c2", "[2018-06-25 05:55:45,034] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", "", "[2018-06-25 05:55:45,034] 
(heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.14/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.106/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.14/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.15/24\"}], \"type\": 
\"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.106/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/06/25 05:55:17 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/06/25 05:55:17 AM] [INFO] Ifcfg net config provider created.", "[2018/06/25 05:55:17 AM] [INFO] Not using any mapping file.", "[2018/06/25 05:55:17 AM] [INFO] Finding active nics", "[2018/06/25 05:55:17 AM] [INFO] eth2 is an embedded active nic", "[2018/06/25 05:55:17 AM] [INFO] eth1 is an embedded active nic", "[2018/06/25 05:55:17 AM] [INFO] eth0 is an embedded active nic", "[2018/06/25 05:55:17 AM] [INFO] lo is not an active nic", "[2018/06/25 05:55:17 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/06/25 05:55:17 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/06/25 05:55:17 AM] [INFO] nic3 mapped to: eth2", "[2018/06/25 05:55:17 AM] [INFO] nic2 mapped to: eth1", "[2018/06/25 05:55:17 AM] [INFO] nic1 mapped to: eth0", "[2018/06/25 05:55:17 AM] [INFO] adding interface: eth0", "[2018/06/25 05:55:17 AM] [INFO] adding custom route for interface: eth0", "[2018/06/25 05:55:17 AM] [INFO] adding bridge: br-isolated", "[2018/06/25 05:55:17 AM] [INFO] adding interface: eth1", "[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan20", "[2018/06/25 05:55:17 AM] [INFO] 
adding vlan: vlan30", "[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan40", "[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan50", "[2018/06/25 05:55:17 AM] [INFO] adding bridge: br-ex", "[2018/06/25 05:55:17 AM] [INFO] adding custom route for interface: br-ex", "[2018/06/25 05:55:17 AM] [INFO] adding interface: eth2", "[2018/06/25 05:55:17 AM] [INFO] applying network configs...", "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan20", "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan40", "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan50", "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth2", "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth1", "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth0", "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan50", "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan20", "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan40", "[2018/06/25 05:55:17 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/06/25 05:55:17 AM] [INFO] running ifdown on bridge: br-ex", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/06/25 05:55:17 AM] [INFO] Writing 
config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/06/25 05:55:17 AM] [INFO] running ifup on bridge: br-isolated", "[2018/06/25 05:55:18 AM] [INFO] running ifup on bridge: br-ex", "[2018/06/25 05:55:22 
AM] [INFO] running ifup on interface: eth2", "[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth1", "[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth0", "[2018/06/25 05:55:27 AM] [INFO] running ifup on interface: vlan50", "[2018/06/25 05:55:31 AM] [INFO] running ifup on interface: vlan20", "[2018/06/25 05:55:35 AM] [INFO] running ifup on interface: vlan30", "[2018/06/25 05:55:39 AM] [INFO] running ifup on interface: vlan40", "[2018/06/25 05:55:43 AM] [INFO] running ifup on interface: vlan20", "[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan30", "[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan40", "[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.3", "++ '[' -n 192.168.24.3 ']'", "++ break", "++ echo 192.168.24.3", "+ local METADATA_IP=192.168.24.3", "+ '[' -n 192.168.24.3 ']'", 
"+ is_local_ip 192.168.24.3", "+ local IP_TO_CHECK=192.168.24.3", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.3/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", "+ _ping=ping", "+ [[ 192.168.24.3 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.3", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-06-25 05:55:45,034] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b8caea11-6bbd-4280-9e4f-7f23681328c2", "", "[2018-06-25 05:55:45,039] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-25 05:55:45,040] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b8caea11-6bbd-4280-9e4f-7f23681328c2.json < /var/lib/heat-config/deployed/b8caea11-6bbd-4280-9e4f-7f23681328c2.notify.json", "[2018-06-25 05:55:45,460] (heat-config) [INFO] ", "[2018-06-25 05:55:45,460] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:55:45,429 p=25239 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-06-25 05:55:45,483 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:55:16,544] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b8caea11-6bbd-4280-9e4f-7f23681328c2.json", > "[2018-06-25 05:55:45,038] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.14/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], 
\\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.106/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.14/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.106/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/25 05:55:17 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/25 05:55:17 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/25 05:55:17 AM] [INFO] Not using any mapping file.\\n[2018/06/25 05:55:17 AM] [INFO] Finding active nics\\n[2018/06/25 05:55:17 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/25 05:55:17 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/25 05:55:17 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/25 05:55:17 AM] [INFO] lo is not an active nic\\n[2018/06/25 05:55:17 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/25 05:55:17 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/25 05:55:17 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/25 05:55:17 AM] [INFO] nic2 mapped 
to: eth1\\n[2018/06/25 05:55:17 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/25 05:55:17 AM] [INFO] adding interface: eth0\\n[2018/06/25 05:55:17 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/25 05:55:17 AM] [INFO] adding bridge: br-isolated\\n[2018/06/25 05:55:17 AM] [INFO] adding interface: eth1\\n[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan20\\n[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan30\\n[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan40\\n[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan50\\n[2018/06/25 05:55:17 AM] [INFO] adding bridge: br-ex\\n[2018/06/25 05:55:17 AM] [INFO] adding custom route for interface: br-ex\\n[2018/06/25 05:55:17 AM] [INFO] adding interface: eth2\\n[2018/06/25 05:55:17 AM] [INFO] applying network configs...\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/25 05:55:17 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/25 05:55:17 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/25 05:55:17 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/25 05:55:17 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/25 05:55:18 AM] [INFO] running ifup on bridge: br-ex\\n[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth2\\n[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth1\\n[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth0\\n[2018/06/25 05:55:27 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/25 05:55:31 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/25 05:55:35 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:55:39 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/25 05:55:43 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-06-25 05:55:45,039] (heat-config) [DEBUG] [2018-06-25 05:55:16,565] (heat-config) [INFO] interface_name=nic1", > "[2018-06-25 05:55:16,565] (heat-config) [INFO] bridge_name=br-ex", > "[2018-06-25 05:55:16,565] (heat-config) [INFO] deploy_server_id=36f09d61-d3be-4f36-b08d-65f6c3b139be", > "[2018-06-25 05:55:16,565] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-25 05:55:16,565] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-zfmkvj446xuu-0-jnx2f6uhtioa-NetworkDeployment-gakjnlwx3upd-TripleOSoftwareDeployment-kz534teopa7c/81c58065-f93c-410b-9bb7-06801369db18", > "[2018-06-25 05:55:16,565] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-25 05:55:16,566] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-25 05:55:16,566] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b8caea11-6bbd-4280-9e4f-7f23681328c2", > "[2018-06-25 
05:55:45,034] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", > "", > "[2018-06-25 05:55:45,034] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.14/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.106/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.14/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": 
[{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.106/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/06/25 05:55:17 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/06/25 05:55:17 AM] [INFO] Ifcfg net config provider created.", > "[2018/06/25 05:55:17 AM] [INFO] Not using any mapping file.", > "[2018/06/25 05:55:17 AM] [INFO] Finding active nics", > "[2018/06/25 05:55:17 AM] [INFO] eth2 is an embedded active nic", > "[2018/06/25 05:55:17 AM] [INFO] eth1 is an embedded active nic", > "[2018/06/25 05:55:17 AM] [INFO] eth0 is an embedded active nic", > "[2018/06/25 05:55:17 AM] [INFO] lo is not an active nic", > "[2018/06/25 05:55:17 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/06/25 05:55:17 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/06/25 05:55:17 AM] [INFO] nic3 mapped to: eth2", > "[2018/06/25 05:55:17 AM] [INFO] nic2 mapped to: eth1", > "[2018/06/25 05:55:17 AM] [INFO] nic1 mapped to: eth0", > "[2018/06/25 05:55:17 AM] [INFO] adding interface: eth0", > "[2018/06/25 05:55:17 AM] [INFO] adding custom route for interface: eth0", > "[2018/06/25 
05:55:17 AM] [INFO] adding bridge: br-isolated", > "[2018/06/25 05:55:17 AM] [INFO] adding interface: eth1", > "[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan20", > "[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan30", > "[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan40", > "[2018/06/25 05:55:17 AM] [INFO] adding vlan: vlan50", > "[2018/06/25 05:55:17 AM] [INFO] adding bridge: br-ex", > "[2018/06/25 05:55:17 AM] [INFO] adding custom route for interface: br-ex", > "[2018/06/25 05:55:17 AM] [INFO] adding interface: eth2", > "[2018/06/25 05:55:17 AM] [INFO] applying network configs...", > "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan20", > "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan40", > "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan50", > "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth2", > "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth1", > "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: eth0", > "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan50", > "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan20", > "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/25 05:55:17 AM] [INFO] running ifdown on interface: vlan40", > "[2018/06/25 05:55:17 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/06/25 05:55:17 AM] [INFO] running ifdown on bridge: br-ex", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/06/25 05:55:17 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-vlan40", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/06/25 05:55:17 AM] 
[INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/06/25 05:55:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/06/25 05:55:17 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/06/25 05:55:18 AM] [INFO] running ifup on bridge: br-ex", > "[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth2", > "[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth1", > "[2018/06/25 05:55:22 AM] [INFO] running ifup on interface: eth0", > "[2018/06/25 05:55:27 AM] [INFO] running ifup on interface: vlan50", > "[2018/06/25 05:55:31 AM] [INFO] running ifup on interface: vlan20", > "[2018/06/25 05:55:35 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/25 05:55:39 AM] [INFO] running ifup on interface: vlan40", > "[2018/06/25 05:55:43 AM] [INFO] running ifup on interface: vlan20", > "[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan40", > "[2018/06/25 05:55:44 AM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.3", > "++ '[' -n 192.168.24.3 ']'", > "++ break", > "++ echo 192.168.24.3", > "+ local METADATA_IP=192.168.24.3", > "+ '[' -n 192.168.24.3 ']'", > "+ is_local_ip 192.168.24.3", > "+ local IP_TO_CHECK=192.168.24.3", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.3/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", > "+ _ping=ping", > "+ [[ 192.168.24.3 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.3", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-06-25 05:55:45,034] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b8caea11-6bbd-4280-9e4f-7f23681328c2", > "", > "[2018-06-25 05:55:45,039] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-25 05:55:45,040] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b8caea11-6bbd-4280-9e4f-7f23681328c2.json < /var/lib/heat-config/deployed/b8caea11-6bbd-4280-9e4f-7f23681328c2.notify.json", > "[2018-06-25 05:55:45,460] (heat-config) [INFO] ", > "[2018-06-25 05:55:45,460] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:55:45,510 p=25239 u=mistral | TASK [Check-mode for Run deployment NetworkDeployment] ************************* >2018-06-25 05:55:45,526 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-06-25 05:55:45,547 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:55:45,599 p=25239 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "2b65c052-8c90-416b-a696-1f89225d88a4"}, "changed": false} >2018-06-25 05:55:45,621 p=25239 u=mistral | TASK [Render deployment file for ControllerUpgradeInitDeployment] ************** >2018-06-25 05:55:46,310 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "97ab3f1ad5cccfe559a1a6ff37259ea9261b7e31", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerUpgradeInitDeployment-2b65c052-8c90-416b-a696-1f89225d88a4", "gid": 0, "group": "root", "md5sum": "4e59e5379234962067c99b7ff2763d60", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1183, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920545.67-268083749285208/source", "state": "file", "uid": 0} >2018-06-25 05:55:46,336 p=25239 u=mistral | TASK [Check if deployed file exists for ControllerUpgradeInitDeployment] ******* >2018-06-25 05:55:46,694 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:55:46,716 p=25239 u=mistral | TASK [Check previous deployment rc for ControllerUpgradeInitDeployment] ******** >2018-06-25 05:55:46,734 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:46,756 p=25239 u=mistral | TASK [Remove deployed file for ControllerUpgradeInitDeployment when previous deployment failed] *** >2018-06-25 05:55:46,775 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:46,798 p=25239 u=mistral | TASK [Force remove deployed file for ControllerUpgradeInitDeployment] ********** >2018-06-25 05:55:46,815 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-25 05:55:46,837 p=25239 u=mistral | TASK [Run deployment ControllerUpgradeInitDeployment] ************************** >2018-06-25 05:55:47,719 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/2b65c052-8c90-416b-a696-1f89225d88a4.notify.json)", "delta": "0:00:00.510443", "end": "2018-06-25 05:55:47.806913", "rc": 0, "start": "2018-06-25 05:55:47.296470", "stderr": "[2018-06-25 05:55:47,323] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/2b65c052-8c90-416b-a696-1f89225d88a4.json\n[2018-06-25 05:55:47,355] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:55:47,355] (heat-config) [DEBUG] [2018-06-25 05:55:47,346] (heat-config) [INFO] deploy_server_id=36f09d61-d3be-4f36-b08d-65f6c3b139be\n[2018-06-25 05:55:47,346] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-25 05:55:47,346] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-zfmkvj446xuu-0-jnx2f6uhtioa-ControllerUpgradeInitDeployment-h77tvhtbminu/2903c72b-0d4e-418c-bd2a-223072a19ca5\n[2018-06-25 05:55:47,347] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-25 05:55:47,347] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-25 05:55:47,347] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/2b65c052-8c90-416b-a696-1f89225d88a4\n[2018-06-25 05:55:47,351] (heat-config) [INFO] \n[2018-06-25 05:55:47,351] (heat-config) [DEBUG] \n[2018-06-25 05:55:47,351] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/2b65c052-8c90-416b-a696-1f89225d88a4\n\n[2018-06-25 05:55:47,355] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-25 05:55:47,355] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/2b65c052-8c90-416b-a696-1f89225d88a4.json < /var/lib/heat-config/deployed/2b65c052-8c90-416b-a696-1f89225d88a4.notify.json\n[2018-06-25 05:55:47,799] (heat-config) [INFO] \n[2018-06-25 05:55:47,800] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:55:47,323] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/2b65c052-8c90-416b-a696-1f89225d88a4.json", "[2018-06-25 05:55:47,355] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:55:47,355] (heat-config) [DEBUG] [2018-06-25 05:55:47,346] (heat-config) [INFO] deploy_server_id=36f09d61-d3be-4f36-b08d-65f6c3b139be", "[2018-06-25 05:55:47,346] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-25 05:55:47,346] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-zfmkvj446xuu-0-jnx2f6uhtioa-ControllerUpgradeInitDeployment-h77tvhtbminu/2903c72b-0d4e-418c-bd2a-223072a19ca5", "[2018-06-25 05:55:47,347] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-25 05:55:47,347] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-25 05:55:47,347] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/2b65c052-8c90-416b-a696-1f89225d88a4", "[2018-06-25 05:55:47,351] (heat-config) [INFO] ", "[2018-06-25 05:55:47,351] (heat-config) [DEBUG] ", "[2018-06-25 05:55:47,351] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/2b65c052-8c90-416b-a696-1f89225d88a4", "", "[2018-06-25 05:55:47,355] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-25 05:55:47,355] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/2b65c052-8c90-416b-a696-1f89225d88a4.json < /var/lib/heat-config/deployed/2b65c052-8c90-416b-a696-1f89225d88a4.notify.json", "[2018-06-25 05:55:47,799] (heat-config) [INFO] ", "[2018-06-25 05:55:47,800] (heat-config) [DEBUG] "], "stdout": "", 
"stdout_lines": []} >2018-06-25 05:55:47,743 p=25239 u=mistral | TASK [Output for ControllerUpgradeInitDeployment] ****************************** >2018-06-25 05:55:47,789 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:55:47,323] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/2b65c052-8c90-416b-a696-1f89225d88a4.json", > "[2018-06-25 05:55:47,355] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:55:47,355] (heat-config) [DEBUG] [2018-06-25 05:55:47,346] (heat-config) [INFO] deploy_server_id=36f09d61-d3be-4f36-b08d-65f6c3b139be", > "[2018-06-25 05:55:47,346] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-25 05:55:47,346] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-zfmkvj446xuu-0-jnx2f6uhtioa-ControllerUpgradeInitDeployment-h77tvhtbminu/2903c72b-0d4e-418c-bd2a-223072a19ca5", > "[2018-06-25 05:55:47,347] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-25 05:55:47,347] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-25 05:55:47,347] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/2b65c052-8c90-416b-a696-1f89225d88a4", > "[2018-06-25 05:55:47,351] (heat-config) [INFO] ", > "[2018-06-25 05:55:47,351] (heat-config) [DEBUG] ", > "[2018-06-25 05:55:47,351] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/2b65c052-8c90-416b-a696-1f89225d88a4", > "", > "[2018-06-25 05:55:47,355] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-25 05:55:47,355] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/2b65c052-8c90-416b-a696-1f89225d88a4.json < /var/lib/heat-config/deployed/2b65c052-8c90-416b-a696-1f89225d88a4.notify.json", > "[2018-06-25 05:55:47,799] (heat-config) [INFO] ", > "[2018-06-25 05:55:47,800] 
(heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:55:47,814 p=25239 u=mistral | TASK [Check-mode for Run deployment ControllerUpgradeInitDeployment] *********** >2018-06-25 05:55:47,829 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:47,851 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:55:48,216 p=25239 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "ab5aa0c5-34cd-44d3-8ca9-8958efb2ee96"}, "changed": false} >2018-06-25 05:55:48,238 p=25239 u=mistral | TASK [Render deployment file for ControllerDeployment] ************************* >2018-06-25 05:55:49,296 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "89c7b8db7eba1ad6398f497c252fc50bc349971b", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerDeployment-ab5aa0c5-34cd-44d3-8ca9-8958efb2ee96", "gid": 0, "group": "root", "md5sum": "8775635f05d4682fa2b7ad41a8d78fae", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 73462, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920548.64-79178806379812/source", "state": "file", "uid": 0} >2018-06-25 05:55:49,321 p=25239 u=mistral | TASK [Check if deployed file exists for ControllerDeployment] ****************** >2018-06-25 05:55:49,688 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:55:49,715 p=25239 u=mistral | TASK [Check previous deployment rc for ControllerDeployment] ******************* >2018-06-25 05:55:49,733 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:49,759 p=25239 u=mistral | TASK [Remove deployed file for ControllerDeployment when previous deployment failed] *** >2018-06-25 05:55:49,777 p=25239 u=mistral | skipping: 
[controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:49,801 p=25239 u=mistral | TASK [Force remove deployed file for ControllerDeployment] ********************* >2018-06-25 05:55:49,819 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:49,843 p=25239 u=mistral | TASK [Run deployment ControllerDeployment] ************************************* >2018-06-25 05:55:50,911 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/ab5aa0c5-34cd-44d3-8ca9-8958efb2ee96.notify.json)", "delta": "0:00:00.635100", "end": "2018-06-25 05:55:50.995332", "rc": 0, "start": "2018-06-25 05:55:50.360232", "stderr": "[2018-06-25 05:55:50,389] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/ab5aa0c5-34cd-44d3-8ca9-8958efb2ee96.json\n[2018-06-25 05:55:50,524] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:55:50,524] (heat-config) [DEBUG] \n[2018-06-25 05:55:50,524] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-25 05:55:50,524] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ab5aa0c5-34cd-44d3-8ca9-8958efb2ee96.json < /var/lib/heat-config/deployed/ab5aa0c5-34cd-44d3-8ca9-8958efb2ee96.notify.json\n[2018-06-25 05:55:50,987] (heat-config) [INFO] \n[2018-06-25 05:55:50,987] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:55:50,389] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/ab5aa0c5-34cd-44d3-8ca9-8958efb2ee96.json", "[2018-06-25 05:55:50,524] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:55:50,524] (heat-config) [DEBUG] 
", "[2018-06-25 05:55:50,524] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-25 05:55:50,524] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ab5aa0c5-34cd-44d3-8ca9-8958efb2ee96.json < /var/lib/heat-config/deployed/ab5aa0c5-34cd-44d3-8ca9-8958efb2ee96.notify.json", "[2018-06-25 05:55:50,987] (heat-config) [INFO] ", "[2018-06-25 05:55:50,987] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:55:50,933 p=25239 u=mistral | TASK [Output for ControllerDeployment] ***************************************** >2018-06-25 05:55:50,985 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:55:50,389] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/ab5aa0c5-34cd-44d3-8ca9-8958efb2ee96.json", > "[2018-06-25 05:55:50,524] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:55:50,524] (heat-config) [DEBUG] ", > "[2018-06-25 05:55:50,524] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-25 05:55:50,524] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ab5aa0c5-34cd-44d3-8ca9-8958efb2ee96.json < /var/lib/heat-config/deployed/ab5aa0c5-34cd-44d3-8ca9-8958efb2ee96.notify.json", > "[2018-06-25 05:55:50,987] (heat-config) [INFO] ", > "[2018-06-25 05:55:50,987] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:55:51,010 p=25239 u=mistral | TASK [Check-mode for Run deployment ControllerDeployment] ********************** >2018-06-25 05:55:51,024 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:51,047 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:55:51,145 p=25239 u=mistral | 
ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "98e9a8c0-ee1b-4fde-9f73-76df90a406d9"}, "changed": false} >2018-06-25 05:55:51,171 p=25239 u=mistral | TASK [Render deployment file for ControllerHostsDeployment] ******************** >2018-06-25 05:55:51,892 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "73e8e5d080941e5d4b213eac0e0ee8f8d0fb0b37", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostsDeployment-98e9a8c0-ee1b-4fde-9f73-76df90a406d9", "gid": 0, "group": "root", "md5sum": "b49b0f7b56d95a6216eec75e8238cf01", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4087, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920551.27-167990377493896/source", "state": "file", "uid": 0} >2018-06-25 05:55:51,914 p=25239 u=mistral | TASK [Check if deployed file exists for ControllerHostsDeployment] ************* >2018-06-25 05:55:52,321 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:55:52,346 p=25239 u=mistral | TASK [Check previous deployment rc for ControllerHostsDeployment] ************** >2018-06-25 05:55:52,365 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:52,388 p=25239 u=mistral | TASK [Remove deployed file for ControllerHostsDeployment when previous deployment failed] *** >2018-06-25 05:55:52,406 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:52,428 p=25239 u=mistral | TASK [Force remove deployed file for ControllerHostsDeployment] **************** >2018-06-25 05:55:52,446 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:52,469 p=25239 u=mistral | TASK [Run deployment ControllerHostsDeployment] ******************************** >2018-06-25 05:55:53,425 
p=25239 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/98e9a8c0-ee1b-4fde-9f73-76df90a406d9.notify.json)", "delta": "0:00:00.519720", "end": "2018-06-25 05:55:53.488960", "rc": 0, "start": "2018-06-25 05:55:52.969240", "stderr": "[2018-06-25 05:55:52,995] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/98e9a8c0-ee1b-4fde-9f73-76df90a406d9.json\n[2018-06-25 05:55:53,037] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain 
ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne 
'# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 
overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}\n[2018-06-25 05:55:53,037] (heat-config) [DEBUG] [2018-06-25 05:55:53,019] (heat-config) [INFO] hosts=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-06-25 05:55:53,019] (heat-config) [INFO] deploy_server_id=36f09d61-d3be-4f36-b08d-65f6c3b139be\n[2018-06-25 05:55:53,019] (heat-config) [INFO] 
deploy_action=CREATE\n[2018-06-25 05:55:53,019] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-tlp5bcmf5wyy-0-pbxnde2n7sox/5ded0f99-a66e-4854-a715-d69fe50df729\n[2018-06-25 05:55:53,019] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-25 05:55:53,020] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-25 05:55:53,020] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/98e9a8c0-ee1b-4fde-9f73-76df90a406d9\n[2018-06-25 05:55:53,033] (heat-config) [INFO] \n[2018-06-25 05:55:53,034] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain 
ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 
'entries=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 
overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 
overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries 
/etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 
overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-06-25 05:55:53,034] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/98e9a8c0-ee1b-4fde-9f73-76df90a406d9\n\n[2018-06-25 05:55:53,038] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-25 05:55:53,038] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/98e9a8c0-ee1b-4fde-9f73-76df90a406d9.json < /var/lib/heat-config/deployed/98e9a8c0-ee1b-4fde-9f73-76df90a406d9.notify.json\n[2018-06-25 05:55:53,482] (heat-config) [INFO] \n[2018-06-25 05:55:53,482] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:55:52,995] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/98e9a8c0-ee1b-4fde-9f73-76df90a406d9.json", "[2018-06-25 05:55:53,037] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 
compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 
overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", "[2018-06-25 05:55:53,037] (heat-config) [DEBUG] [2018-06-25 05:55:53,019] (heat-config) [INFO] hosts=192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-06-25 05:55:53,019] (heat-config) [INFO] 
deploy_server_id=36f09d61-d3be-4f36-b08d-65f6c3b139be", "[2018-06-25 05:55:53,019] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-25 05:55:53,019] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-tlp5bcmf5wyy-0-pbxnde2n7sox/5ded0f99-a66e-4854-a715-d69fe50df729", "[2018-06-25 05:55:53,019] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-25 05:55:53,020] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-25 05:55:53,020] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/98e9a8c0-ee1b-4fde-9f73-76df90a406d9", "[2018-06-25 05:55:53,033] (heat-config) [INFO] ", "[2018-06-25 05:55:53,034] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", 
"192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain 
ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 
ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", 
"+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", 
"", "[2018-06-25 05:55:53,034] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/98e9a8c0-ee1b-4fde-9f73-76df90a406d9", "", "[2018-06-25 05:55:53,038] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-25 05:55:53,038] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/98e9a8c0-ee1b-4fde-9f73-76df90a406d9.json < /var/lib/heat-config/deployed/98e9a8c0-ee1b-4fde-9f73-76df90a406d9.notify.json", "[2018-06-25 05:55:53,482] (heat-config) [INFO] ", "[2018-06-25 05:55:53,482] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:55:53,456 p=25239 u=mistral | TASK [Output for ControllerHostsDeployment] ************************************ >2018-06-25 05:55:53,534 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:55:52,995] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/98e9a8c0-ee1b-4fde-9f73-76df90a406d9.json", > "[2018-06-25 05:55:53,037] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 
overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain 
controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne 
'# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 
overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", > "[2018-06-25 05:55:53,037] (heat-config) [DEBUG] [2018-06-25 05:55:53,019] (heat-config) [INFO] hosts=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-06-25 
05:55:53,019] (heat-config) [INFO] deploy_server_id=36f09d61-d3be-4f36-b08d-65f6c3b139be", > "[2018-06-25 05:55:53,019] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-25 05:55:53,019] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-tlp5bcmf5wyy-0-pbxnde2n7sox/5ded0f99-a66e-4854-a715-d69fe50df729", > "[2018-06-25 05:55:53,019] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-25 05:55:53,020] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-25 05:55:53,020] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/98e9a8c0-ee1b-4fde-9f73-76df90a406d9", > "[2018-06-25 05:55:53,033] (heat-config) [INFO] ", > "[2018-06-25 05:55:53,034] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain 
compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain 
compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 
compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 
compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > 
"192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 
compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > 
"192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 
compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 
compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 
compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-06-25 05:55:53,034] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/98e9a8c0-ee1b-4fde-9f73-76df90a406d9", > "", > "[2018-06-25 05:55:53,038] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-25 05:55:53,038] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/98e9a8c0-ee1b-4fde-9f73-76df90a406d9.json < /var/lib/heat-config/deployed/98e9a8c0-ee1b-4fde-9f73-76df90a406d9.notify.json", > "[2018-06-25 05:55:53,482] (heat-config) [INFO] ", > "[2018-06-25 05:55:53,482] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:55:53,565 p=25239 u=mistral | TASK [Check-mode for Run deployment ControllerHostsDeployment] ***************** >2018-06-25 05:55:53,582 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:53,604 p=25239 u=mistral | TASK [Lookup deployment UUID] 
************************************************** >2018-06-25 05:55:53,736 p=25239 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "6f8c2ce7-5c84-4f3c-9459-3df998c3f021"}, "changed": false} >2018-06-25 05:55:53,758 p=25239 u=mistral | TASK [Render deployment file for ControllerAllNodesDeployment] ***************** >2018-06-25 05:55:54,495 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "bd117a31aa4fd753a84401bf61dd132045583f6b", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerAllNodesDeployment-6f8c2ce7-5c84-4f3c-9459-3df998c3f021", "gid": 0, "group": "root", "md5sum": "887d93ace75c947fd38cbde41da797e5", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19036, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920553.9-195789585750765/source", "state": "file", "uid": 0} >2018-06-25 05:55:54,520 p=25239 u=mistral | TASK [Check if deployed file exists for ControllerAllNodesDeployment] ********** >2018-06-25 05:55:54,865 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:55:54,889 p=25239 u=mistral | TASK [Check previous deployment rc for ControllerAllNodesDeployment] *********** >2018-06-25 05:55:54,907 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:54,932 p=25239 u=mistral | TASK [Remove deployed file for ControllerAllNodesDeployment when previous deployment failed] *** >2018-06-25 05:55:54,951 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:54,976 p=25239 u=mistral | TASK [Force remove deployed file for ControllerAllNodesDeployment] ************* >2018-06-25 05:55:54,994 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:55,019 p=25239 u=mistral | TASK [Run 
deployment ControllerAllNodesDeployment] ***************************** >2018-06-25 05:55:55,936 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/6f8c2ce7-5c84-4f3c-9459-3df998c3f021.notify.json)", "delta": "0:00:00.570142", "end": "2018-06-25 05:55:56.027708", "rc": 0, "start": "2018-06-25 05:55:55.457566", "stderr": "[2018-06-25 05:55:55,483] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/6f8c2ce7-5c84-4f3c-9459-3df998c3f021.json\n[2018-06-25 05:55:55,599] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:55:55,599] (heat-config) [DEBUG] \n[2018-06-25 05:55:55,599] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-25 05:55:55,600] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/6f8c2ce7-5c84-4f3c-9459-3df998c3f021.json < /var/lib/heat-config/deployed/6f8c2ce7-5c84-4f3c-9459-3df998c3f021.notify.json\n[2018-06-25 05:55:56,021] (heat-config) [INFO] \n[2018-06-25 05:55:56,021] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:55:55,483] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/6f8c2ce7-5c84-4f3c-9459-3df998c3f021.json", "[2018-06-25 05:55:55,599] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:55:55,599] (heat-config) [DEBUG] ", "[2018-06-25 05:55:55,599] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-25 05:55:55,600] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/6f8c2ce7-5c84-4f3c-9459-3df998c3f021.json < /var/lib/heat-config/deployed/6f8c2ce7-5c84-4f3c-9459-3df998c3f021.notify.json", "[2018-06-25 05:55:56,021] (heat-config) [INFO] ", "[2018-06-25 
05:55:56,021] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:55:55,960 p=25239 u=mistral | TASK [Output for ControllerAllNodesDeployment] ********************************* >2018-06-25 05:55:56,012 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:55:55,483] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/6f8c2ce7-5c84-4f3c-9459-3df998c3f021.json", > "[2018-06-25 05:55:55,599] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:55:55,599] (heat-config) [DEBUG] ", > "[2018-06-25 05:55:55,599] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-25 05:55:55,600] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/6f8c2ce7-5c84-4f3c-9459-3df998c3f021.json < /var/lib/heat-config/deployed/6f8c2ce7-5c84-4f3c-9459-3df998c3f021.notify.json", > "[2018-06-25 05:55:56,021] (heat-config) [INFO] ", > "[2018-06-25 05:55:56,021] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:55:56,037 p=25239 u=mistral | TASK [Check-mode for Run deployment ControllerAllNodesDeployment] ************** >2018-06-25 05:55:56,053 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:56,075 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:55:56,132 p=25239 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "1fe0426b-6260-47a7-a3a5-5e48f3e72e62"}, "changed": false} >2018-06-25 05:55:56,154 p=25239 u=mistral | TASK [Render deployment file for ControllerAllNodesValidationDeployment] ******* >2018-06-25 05:55:56,805 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "3822b8a5acbffc4e930cf7445af7305e955e9be7", 
"dest": "/var/lib/heat-config/tripleo-config-download/ControllerAllNodesValidationDeployment-1fe0426b-6260-47a7-a3a5-5e48f3e72e62", "gid": 0, "group": "root", "md5sum": "e9ea427f9b607113cd0558ef5a928064", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4941, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920556.21-275081272938275/source", "state": "file", "uid": 0} >2018-06-25 05:55:56,829 p=25239 u=mistral | TASK [Check if deployed file exists for ControllerAllNodesValidationDeployment] *** >2018-06-25 05:55:57,182 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:55:57,206 p=25239 u=mistral | TASK [Check previous deployment rc for ControllerAllNodesValidationDeployment] *** >2018-06-25 05:55:57,227 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:57,254 p=25239 u=mistral | TASK [Remove deployed file for ControllerAllNodesValidationDeployment when previous deployment failed] *** >2018-06-25 05:55:57,273 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:57,298 p=25239 u=mistral | TASK [Force remove deployed file for ControllerAllNodesValidationDeployment] *** >2018-06-25 05:55:57,314 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:57,338 p=25239 u=mistral | TASK [Run deployment ControllerAllNodesValidationDeployment] ******************* >2018-06-25 05:55:58,890 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/1fe0426b-6260-47a7-a3a5-5e48f3e72e62.notify.json)", "delta": "0:00:01.199275", "end": "2018-06-25 05:55:58.977036", "rc": 0, "start": "2018-06-25 05:55:57.777761", 
"stderr": "[2018-06-25 05:55:57,802] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/1fe0426b-6260-47a7-a3a5-5e48f3e72e62.json\n[2018-06-25 05:55:58,562] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.106 for local network 10.0.0.0/24.\\nPing to 10.0.0.106 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.12 for local network 172.17.1.0/24.\\nPing to 172.17.1.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.16 for local network 172.17.2.0/24.\\nPing to 172.17.2.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.10 for local network 172.17.3.0/24.\\nPing to 172.17.3.10 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.15 for local network 172.17.4.0/24.\\nPing to 172.17.4.15 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.14 for local network 192.168.24.0/24.\\nPing to 192.168.24.14 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:55:58,562] (heat-config) [DEBUG] [2018-06-25 05:55:57,823] (heat-config) [INFO] ping_test_ips=172.17.3.10 172.17.4.15 172.17.1.12 172.17.2.16 10.0.0.106 192.168.24.14\n[2018-06-25 05:55:57,823] (heat-config) [INFO] validate_fqdn=False\n[2018-06-25 05:55:57,824] (heat-config) [INFO] validate_ntp=True\n[2018-06-25 05:55:57,824] (heat-config) [INFO] deploy_server_id=36f09d61-d3be-4f36-b08d-65f6c3b139be\n[2018-06-25 05:55:57,824] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-25 05:55:57,824] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-3hngnixt4lcf-0-ftabreti5twy/c9f3f716-c55b-4df1-a7f4-be80d111bb82\n[2018-06-25 05:55:57,824] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-25 05:55:57,824] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-25 05:55:57,824] (heat-config) [DEBUG] Running 
/var/lib/heat-config/heat-config-script/1fe0426b-6260-47a7-a3a5-5e48f3e72e62\n[2018-06-25 05:55:58,558] (heat-config) [INFO] Trying to ping 10.0.0.106 for local network 10.0.0.0/24.\nPing to 10.0.0.106 succeeded.\nSUCCESS\nTrying to ping 172.17.1.12 for local network 172.17.1.0/24.\nPing to 172.17.1.12 succeeded.\nSUCCESS\nTrying to ping 172.17.2.16 for local network 172.17.2.0/24.\nPing to 172.17.2.16 succeeded.\nSUCCESS\nTrying to ping 172.17.3.10 for local network 172.17.3.0/24.\nPing to 172.17.3.10 succeeded.\nSUCCESS\nTrying to ping 172.17.4.15 for local network 172.17.4.0/24.\nPing to 172.17.4.15 succeeded.\nSUCCESS\nTrying to ping 192.168.24.14 for local network 192.168.24.0/24.\nPing to 192.168.24.14 succeeded.\nSUCCESS\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-06-25 05:55:58,558] (heat-config) [DEBUG] \n[2018-06-25 05:55:58,558] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/1fe0426b-6260-47a7-a3a5-5e48f3e72e62\n\n[2018-06-25 05:55:58,562] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-25 05:55:58,563] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1fe0426b-6260-47a7-a3a5-5e48f3e72e62.json < /var/lib/heat-config/deployed/1fe0426b-6260-47a7-a3a5-5e48f3e72e62.notify.json\n[2018-06-25 05:55:58,970] (heat-config) [INFO] \n[2018-06-25 05:55:58,970] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:55:57,802] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/1fe0426b-6260-47a7-a3a5-5e48f3e72e62.json", "[2018-06-25 05:55:58,562] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.106 for local network 10.0.0.0/24.\\nPing to 10.0.0.106 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.12 for local network 172.17.1.0/24.\\nPing to 172.17.1.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.16 for local network 172.17.2.0/24.\\nPing to 172.17.2.16 
succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.10 for local network 172.17.3.0/24.\\nPing to 172.17.3.10 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.15 for local network 172.17.4.0/24.\\nPing to 172.17.4.15 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.14 for local network 192.168.24.0/24.\\nPing to 192.168.24.14 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:55:58,562] (heat-config) [DEBUG] [2018-06-25 05:55:57,823] (heat-config) [INFO] ping_test_ips=172.17.3.10 172.17.4.15 172.17.1.12 172.17.2.16 10.0.0.106 192.168.24.14", "[2018-06-25 05:55:57,823] (heat-config) [INFO] validate_fqdn=False", "[2018-06-25 05:55:57,824] (heat-config) [INFO] validate_ntp=True", "[2018-06-25 05:55:57,824] (heat-config) [INFO] deploy_server_id=36f09d61-d3be-4f36-b08d-65f6c3b139be", "[2018-06-25 05:55:57,824] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-25 05:55:57,824] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-3hngnixt4lcf-0-ftabreti5twy/c9f3f716-c55b-4df1-a7f4-be80d111bb82", "[2018-06-25 05:55:57,824] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-25 05:55:57,824] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-25 05:55:57,824] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/1fe0426b-6260-47a7-a3a5-5e48f3e72e62", "[2018-06-25 05:55:58,558] (heat-config) [INFO] Trying to ping 10.0.0.106 for local network 10.0.0.0/24.", "Ping to 10.0.0.106 succeeded.", "SUCCESS", "Trying to ping 172.17.1.12 for local network 172.17.1.0/24.", "Ping to 172.17.1.12 succeeded.", "SUCCESS", "Trying to ping 172.17.2.16 for local network 172.17.2.0/24.", "Ping to 172.17.2.16 succeeded.", "SUCCESS", "Trying to ping 172.17.3.10 for local network 172.17.3.0/24.", "Ping to 172.17.3.10 succeeded.", "SUCCESS", "Trying to ping 172.17.4.15 for local network 
172.17.4.0/24.", "Ping to 172.17.4.15 succeeded.", "SUCCESS", "Trying to ping 192.168.24.14 for local network 192.168.24.0/24.", "Ping to 192.168.24.14 succeeded.", "SUCCESS", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-06-25 05:55:58,558] (heat-config) [DEBUG] ", "[2018-06-25 05:55:58,558] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/1fe0426b-6260-47a7-a3a5-5e48f3e72e62", "", "[2018-06-25 05:55:58,562] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-25 05:55:58,563] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1fe0426b-6260-47a7-a3a5-5e48f3e72e62.json < /var/lib/heat-config/deployed/1fe0426b-6260-47a7-a3a5-5e48f3e72e62.notify.json", "[2018-06-25 05:55:58,970] (heat-config) [INFO] ", "[2018-06-25 05:55:58,970] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:55:58,914 p=25239 u=mistral | TASK [Output for ControllerAllNodesValidationDeployment] *********************** >2018-06-25 05:55:58,960 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:55:57,802] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/1fe0426b-6260-47a7-a3a5-5e48f3e72e62.json", > "[2018-06-25 05:55:58,562] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.106 for local network 10.0.0.0/24.\\nPing to 10.0.0.106 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.12 for local network 172.17.1.0/24.\\nPing to 172.17.1.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.16 for local network 172.17.2.0/24.\\nPing to 172.17.2.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.10 for local network 172.17.3.0/24.\\nPing to 172.17.3.10 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.15 for local network 172.17.4.0/24.\\nPing to 172.17.4.15 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.14 for local network 
192.168.24.0/24.\\nPing to 192.168.24.14 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:55:58,562] (heat-config) [DEBUG] [2018-06-25 05:55:57,823] (heat-config) [INFO] ping_test_ips=172.17.3.10 172.17.4.15 172.17.1.12 172.17.2.16 10.0.0.106 192.168.24.14", > "[2018-06-25 05:55:57,823] (heat-config) [INFO] validate_fqdn=False", > "[2018-06-25 05:55:57,824] (heat-config) [INFO] validate_ntp=True", > "[2018-06-25 05:55:57,824] (heat-config) [INFO] deploy_server_id=36f09d61-d3be-4f36-b08d-65f6c3b139be", > "[2018-06-25 05:55:57,824] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-25 05:55:57,824] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-3hngnixt4lcf-0-ftabreti5twy/c9f3f716-c55b-4df1-a7f4-be80d111bb82", > "[2018-06-25 05:55:57,824] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-25 05:55:57,824] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-25 05:55:57,824] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/1fe0426b-6260-47a7-a3a5-5e48f3e72e62", > "[2018-06-25 05:55:58,558] (heat-config) [INFO] Trying to ping 10.0.0.106 for local network 10.0.0.0/24.", > "Ping to 10.0.0.106 succeeded.", > "SUCCESS", > "Trying to ping 172.17.1.12 for local network 172.17.1.0/24.", > "Ping to 172.17.1.12 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.16 for local network 172.17.2.0/24.", > "Ping to 172.17.2.16 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.10 for local network 172.17.3.0/24.", > "Ping to 172.17.3.10 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.15 for local network 172.17.4.0/24.", > "Ping to 172.17.4.15 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.14 for local network 192.168.24.0/24.", > "Ping to 192.168.24.14 succeeded.", > "SUCCESS", > "Trying to ping default gateway 10.0.0.1...Ping 
to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-06-25 05:55:58,558] (heat-config) [DEBUG] ", > "[2018-06-25 05:55:58,558] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/1fe0426b-6260-47a7-a3a5-5e48f3e72e62", > "", > "[2018-06-25 05:55:58,562] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-25 05:55:58,563] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1fe0426b-6260-47a7-a3a5-5e48f3e72e62.json < /var/lib/heat-config/deployed/1fe0426b-6260-47a7-a3a5-5e48f3e72e62.notify.json", > "[2018-06-25 05:55:58,970] (heat-config) [INFO] ", > "[2018-06-25 05:55:58,970] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:55:58,982 p=25239 u=mistral | TASK [Check-mode for Run deployment ControllerAllNodesValidationDeployment] **** >2018-06-25 05:55:58,996 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:55:59,017 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:55:59,109 p=25239 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0"}, "changed": false} >2018-06-25 05:55:59,134 p=25239 u=mistral | TASK [Render deployment file for ControllerHostPrepDeployment] ***************** >2018-06-25 05:55:59,835 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "f55be52adb95d9aba02c14d195c5326fef3360d8", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostPrepDeployment-1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0", "gid": 0, "group": "root", "md5sum": "f9cc314c0bee5631c6090a042aa5552f", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 45397, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920559.23-88465689806426/source", "state": "file", "uid": 0} >2018-06-25 05:55:59,857 p=25239 
u=mistral | TASK [Check if deployed file exists for ControllerHostPrepDeployment] ********** >2018-06-25 05:56:00,211 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:56:00,235 p=25239 u=mistral | TASK [Check previous deployment rc for ControllerHostPrepDeployment] *********** >2018-06-25 05:56:00,252 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:00,275 p=25239 u=mistral | TASK [Remove deployed file for ControllerHostPrepDeployment when previous deployment failed] *** >2018-06-25 05:56:00,292 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:00,316 p=25239 u=mistral | TASK [Force remove deployed file for ControllerHostPrepDeployment] ************* >2018-06-25 05:56:00,331 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:00,355 p=25239 u=mistral | TASK [Run deployment ControllerHostPrepDeployment] ***************************** >2018-06-25 05:56:23,926 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0.notify.json)", "delta": "0:00:23.200187", "end": "2018-06-25 05:56:23.993494", "rc": 0, "start": "2018-06-25 05:56:00.793307", "stderr": "[2018-06-25 05:56:00,818] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0.json\n[2018-06-25 05:56:23,581] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs 
directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/aodh)\\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\\n\\nTASK [aodh logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"b6cf6dbe054f430c33d39c1a1a88593536d6e659\\\", \\\"msg\\\": \\\"Destination directory /var/log/aodh does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/cinder)\\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\\n\\nTASK [cinder logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\\\", \\\"msg\\\": \\\"Destination directory /var/log/cinder does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/cinder)\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\nok: [localhost] => (item=/var/lib/cinder)\\n\\nTASK [cinder_enable_iscsi_backend fact] ****************************************\\nok: [localhost]\\n\\nTASK [cinder create LVM volume group dd] ***************************************\\nskipping: [localhost]\\n\\nTASK [cinder create LVM volume group] ******************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/glance)\\n\\nTASK [glance logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"e368ae3272baeb19e1113009ea5dae00e797c919\\\", \\\"msg\\\": \\\"Destination directory /var/log/glance does not exist\\\"}\\n...ignoring\\n\\nTASK [set_fact] ****************************************************************\\nskipping: [localhost]\\n\\nTASK [file] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [stat] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [copy] ********************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \\n\\nTASK [mount] *******************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \\n\\nTASK [Mount Node Staging Location] *********************************************\\nskipping: [localhost]\\n\\nTASK [Mount NFS on host] *******************************************************\\nskipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) \\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\\n\\nTASK [gnocchi logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\\\", \\\"msg\\\": \\\"Destination directory /var/log/gnocchi does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [get parameters] **********************************************************\\nok: [localhost]\\n\\nTASK [get DeployedSSLCertificatePath attributes] *******************************\\nskipping: [localhost]\\n\\nTASK [Assign bootstrap node] ***************************************************\\nskipping: [localhost]\\n\\nTASK [set is_bootstrap_node fact] **********************************************\\nskipping: [localhost]\\n\\nTASK [get haproxy status] ******************************************************\\nskipping: [localhost]\\n\\nTASK [get pacemaker status] ****************************************************\\nskipping: [localhost]\\n\\nTASK [get docker status] *******************************************************\\nskipping: [localhost]\\n\\nTASK [get container_id] ********************************************************\\nskipping: [localhost]\\n\\nTASK [get pcs resource name for haproxy container] *****************************\\nskipping: [localhost]\\n\\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\\nskipping: [localhost]\\n\\nTASK [push certificate content] ************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate ownership] ***********************************************\\nskipping: [localhost]\\n\\nTASK [reload haproxy if enabled] ***********************************************\\nskipping: [localhost]\\n\\nTASK [restart pacemaker resource for haproxy] **********************************\\nskipping: [localhost]\\n\\nTASK [set kolla_dir fact] ******************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate group on host via container] 
*****************************\\nskipping: [localhost]\\n\\nTASK [copy certificate from kolla directory to final location] *****************\\nskipping: [localhost]\\n\\nTASK [send restart order to haproxy container] *********************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/haproxy)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\\n\\nTASK [heat logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"d30ca3bda176434d31659e7379616dd162ddb246\\\", \\\"msg\\\": \\\"Destination directory /var/log/heat does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/horizon)\\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\\n\\nTASK [horizon logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ac324739761cb36b925d6e309482e26f7fe49b91\\\", \\\"msg\\\": \\\"Destination directory /var/log/horizon does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/keystone)\\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\\n\\nTASK [keystone logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"910be882addb6df99267e9bd303f6d9bf658562e\\\", \\\"msg\\\": \\\"Destination directory /var/log/keystone does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [memcached logs readme] ***************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/log/containers/mysql)\\nok: [localhost] => (item=/var/lib/mysql)\\n\\nTASK [mysql logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [create /var/lib/neutron] *************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/panko)\\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\\n\\nTASK [panko logs readme] *******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"903397bbd82e9b1f53087e3d7e8975d851857ce2\\\", \\\"msg\\\": \\\"Destination directory /var/log/panko does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/rabbitmq)\\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\\n\\nTASK [rabbitmq logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ee241f2199f264c9d0f384cf389fe255e8bf8a77\\\", \\\"msg\\\": \\\"Destination directory /var/log/rabbitmq does not exist\\\"}\\n...ignoring\\n\\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/redis)\\nchanged: [localhost] => (item=/var/log/containers/redis)\\nok: [localhost] => (item=/var/run/redis)\\n\\nTASK [redis logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create /var/lib/sahara] **************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent sahara logs directory] *********************************\\nchanged: [localhost]\\n\\nTASK [sahara logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b0212a1177fa4a88502d17a1cbc31198040cf047\\\", \\\"msg\\\": \\\"Destination directory /var/log/sahara does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/srv/node)\\nchanged: [localhost] => (item=/var/log/swift)\\n\\nTASK [Create swift logging symlink] ********************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/srv/node)\\nok: [localhost] => (item=/var/log/swift)\\nok: [localhost] => (item=/var/log/containers)\\n\\nTASK [Set swift_use_local_disks fact] ******************************************\\nok: [localhost]\\n\\nTASK [Create Swift d1 directory if needed] *************************************\\nchanged: [localhost]\\n\\nTASK [swift logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [Format SwiftRawDisks] ****************************************************\\n\\nTASK [Mount devices defined in SwiftRawDisks] **********************************\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=60 changed=33 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:56:23,581] (heat-config) [DEBUG] [2018-06-25 05:56:00,842] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0_variables.json\n[2018-06-25 05:56:23,573] (heat-config) [INFO] Return code 
0\n[2018-06-25 05:56:23,574] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/aodh)\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\n\nTASK [aodh logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b6cf6dbe054f430c33d39c1a1a88593536d6e659\", \"msg\": \"Destination directory /var/log/aodh does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [ceilometer logs readme] **************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/cinder)\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\n\nTASK [cinder logs readme] ******************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\", \"msg\": \"Destination directory /var/log/cinder does not exist\"}\n...ignoring\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/lib/cinder)\nok: [localhost] => (item=/var/log/containers/cinder)\n\nTASK [ensure ceph configurations exist] ****************************************\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/log/containers/cinder)\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/log/containers/cinder)\nok: [localhost] => (item=/var/lib/cinder)\n\nTASK [cinder_enable_iscsi_backend fact] ****************************************\nok: [localhost]\n\nTASK [cinder create LVM volume group dd] ***************************************\nskipping: [localhost]\n\nTASK [cinder create LVM volume group] ******************************************\nskipping: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/glance)\n\nTASK [glance logs readme] ******************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"e368ae3272baeb19e1113009ea5dae00e797c919\", \"msg\": \"Destination directory /var/log/glance does not exist\"}\n...ignoring\n\nTASK [set_fact] ****************************************************************\nskipping: [localhost]\n\nTASK [file] ********************************************************************\nskipping: [localhost]\n\nTASK [stat] ********************************************************************\nskipping: [localhost]\n\nTASK [copy] ********************************************************************\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \n\nTASK [mount] *******************************************************************\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \n\nTASK [Mount Node Staging Location] *********************************************\nskipping: [localhost]\n\nTASK [Mount NFS on host] *******************************************************\nskipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) \n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\n\nTASK [gnocchi logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\", \"msg\": \"Destination directory /var/log/gnocchi does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [get parameters] **********************************************************\nok: [localhost]\n\nTASK [get DeployedSSLCertificatePath attributes] *******************************\nskipping: [localhost]\n\nTASK [Assign bootstrap node] ***************************************************\nskipping: [localhost]\n\nTASK [set is_bootstrap_node fact] **********************************************\nskipping: [localhost]\n\nTASK [get haproxy status] ******************************************************\nskipping: [localhost]\n\nTASK [get pacemaker status] ****************************************************\nskipping: [localhost]\n\nTASK [get docker status] *******************************************************\nskipping: [localhost]\n\nTASK [get container_id] ********************************************************\nskipping: [localhost]\n\nTASK [get pcs resource name for haproxy container] *****************************\nskipping: [localhost]\n\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\nskipping: [localhost]\n\nTASK [push certificate content] ************************************************\nskipping: [localhost]\n\nTASK [set certificate ownership] ***********************************************\nskipping: [localhost]\n\nTASK [reload haproxy if enabled] ***********************************************\nskipping: [localhost]\n\nTASK [restart pacemaker resource for haproxy] **********************************\nskipping: [localhost]\n\nTASK [set kolla_dir fact] ******************************************************\nskipping: [localhost]\n\nTASK [set certificate group on host via container] *****************************\nskipping: [localhost]\n\nTASK [copy certificate 
from kolla directory to final location] *****************\nskipping: [localhost]\n\nTASK [send restart order to haproxy container] *********************************\nskipping: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/lib/haproxy)\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/heat)\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\n\nTASK [heat logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"d30ca3bda176434d31659e7379616dd162ddb246\", \"msg\": \"Destination directory /var/log/heat does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost] => (item=/var/log/containers/heat)\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/horizon)\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\n\nTASK [horizon logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ac324739761cb36b925d6e309482e26f7fe49b91\", \"msg\": \"Destination directory /var/log/horizon does not exist\"}\n...ignoring\n\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\nok: [localhost]\n\nTASK [Stop and disable iscsid.socket service] **********************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/keystone)\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\n\nTASK [keystone logs readme] ****************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"910be882addb6df99267e9bd303f6d9bf658562e\", \"msg\": \"Destination directory /var/log/keystone does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [memcached logs readme] ***************************************************\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/log/containers/mysql)\nok: [localhost] => (item=/var/lib/mysql)\n\nTASK [mysql logs readme] *******************************************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/neutron)\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\n\nTASK [neutron logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost] => (item=/var/log/containers/neutron)\n\nTASK [create /var/lib/neutron] *************************************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/nova)\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\n\nTASK [nova logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost] => (item=/var/log/containers/nova)\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/panko)\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\n\nTASK [panko logs readme] *******************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"903397bbd82e9b1f53087e3d7e8975d851857ce2\", \"msg\": \"Destination directory /var/log/panko does not exist\"}\n...ignoring\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/lib/rabbitmq)\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\n\nTASK [rabbitmq logs readme] ****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ee241f2199f264c9d0f384cf389fe255e8bf8a77\", \"msg\": \"Destination directory /var/log/rabbitmq does not exist\"}\n...ignoring\n\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/lib/redis)\nchanged: [localhost] => (item=/var/log/containers/redis)\nok: [localhost] => (item=/var/run/redis)\n\nTASK [redis logs readme] *******************************************************\nchanged: [localhost]\n\nTASK [create /var/lib/sahara] **************************************************\nchanged: [localhost]\n\nTASK [create persistent sahara logs directory] *********************************\nchanged: [localhost]\n\nTASK [sahara logs readme] ******************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b0212a1177fa4a88502d17a1cbc31198040cf047\", \"msg\": \"Destination directory /var/log/sahara does not exist\"}\n...ignoring\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/srv/node)\nchanged: [localhost] => (item=/var/log/swift)\n\nTASK [Create swift logging symlink] ********************************************\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/srv/node)\nok: [localhost] => (item=/var/log/swift)\nok: [localhost] => (item=/var/log/containers)\n\nTASK [Set swift_use_local_disks fact] ******************************************\nok: [localhost]\n\nTASK [Create Swift d1 directory if needed] *************************************\nchanged: [localhost]\n\nTASK [swift logs readme] *******************************************************\nchanged: [localhost]\n\nTASK [Format SwiftRawDisks] 
****************************************************\n\nTASK [Mount devices defined in SwiftRawDisks] **********************************\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=60 changed=33 unreachable=0 failed=0 \n\n\n[2018-06-25 05:56:23,574] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0_playbook.yaml\n\n[2018-06-25 05:56:23,581] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-06-25 05:56:23,583] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0.json < /var/lib/heat-config/deployed/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0.notify.json\n[2018-06-25 05:56:23,985] (heat-config) [INFO] \n[2018-06-25 05:56:23,986] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:56:00,818] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0.json", "[2018-06-25 05:56:23,581] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/aodh)\\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\\n\\nTASK [aodh logs readme] ********************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b6cf6dbe054f430c33d39c1a1a88593536d6e659\\\", \\\"msg\\\": \\\"Destination directory /var/log/aodh does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/cinder)\\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\\n\\nTASK [cinder logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\\\", \\\"msg\\\": \\\"Destination directory /var/log/cinder does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/cinder)\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\nok: [localhost] => (item=/var/lib/cinder)\\n\\nTASK [cinder_enable_iscsi_backend fact] ****************************************\\nok: [localhost]\\n\\nTASK [cinder create LVM volume group dd] ***************************************\\nskipping: [localhost]\\n\\nTASK [cinder create LVM volume group] ******************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/glance)\\n\\nTASK [glance logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"e368ae3272baeb19e1113009ea5dae00e797c919\\\", \\\"msg\\\": \\\"Destination directory /var/log/glance does not exist\\\"}\\n...ignoring\\n\\nTASK [set_fact] ****************************************************************\\nskipping: [localhost]\\n\\nTASK [file] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [stat] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [copy] ********************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \\n\\nTASK [mount] *******************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \\n\\nTASK [Mount Node Staging Location] *********************************************\\nskipping: [localhost]\\n\\nTASK [Mount NFS on host] *******************************************************\\nskipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) \\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\\n\\nTASK [gnocchi logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\\\", \\\"msg\\\": \\\"Destination directory /var/log/gnocchi does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [get parameters] **********************************************************\\nok: [localhost]\\n\\nTASK [get DeployedSSLCertificatePath attributes] *******************************\\nskipping: [localhost]\\n\\nTASK [Assign bootstrap node] ***************************************************\\nskipping: [localhost]\\n\\nTASK [set is_bootstrap_node fact] **********************************************\\nskipping: [localhost]\\n\\nTASK [get haproxy status] ******************************************************\\nskipping: [localhost]\\n\\nTASK [get pacemaker status] ****************************************************\\nskipping: [localhost]\\n\\nTASK [get docker status] *******************************************************\\nskipping: [localhost]\\n\\nTASK [get container_id] ********************************************************\\nskipping: [localhost]\\n\\nTASK [get pcs resource name for haproxy container] *****************************\\nskipping: [localhost]\\n\\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\\nskipping: [localhost]\\n\\nTASK [push certificate content] ************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate ownership] ***********************************************\\nskipping: [localhost]\\n\\nTASK [reload haproxy if enabled] ***********************************************\\nskipping: [localhost]\\n\\nTASK [restart pacemaker resource for haproxy] **********************************\\nskipping: [localhost]\\n\\nTASK [set kolla_dir fact] ******************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate group on host via container] 
*****************************\\nskipping: [localhost]\\n\\nTASK [copy certificate from kolla directory to final location] *****************\\nskipping: [localhost]\\n\\nTASK [send restart order to haproxy container] *********************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/haproxy)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\\n\\nTASK [heat logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"d30ca3bda176434d31659e7379616dd162ddb246\\\", \\\"msg\\\": \\\"Destination directory /var/log/heat does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/horizon)\\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\\n\\nTASK [horizon logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ac324739761cb36b925d6e309482e26f7fe49b91\\\", \\\"msg\\\": \\\"Destination directory /var/log/horizon does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/keystone)\\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\\n\\nTASK [keystone logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"910be882addb6df99267e9bd303f6d9bf658562e\\\", \\\"msg\\\": \\\"Destination directory /var/log/keystone does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [memcached logs readme] ***************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/log/containers/mysql)\\nok: [localhost] => (item=/var/lib/mysql)\\n\\nTASK [mysql logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [create /var/lib/neutron] *************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/panko)\\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\\n\\nTASK [panko logs readme] *******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"903397bbd82e9b1f53087e3d7e8975d851857ce2\\\", \\\"msg\\\": \\\"Destination directory /var/log/panko does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/rabbitmq)\\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\\n\\nTASK [rabbitmq logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ee241f2199f264c9d0f384cf389fe255e8bf8a77\\\", \\\"msg\\\": \\\"Destination directory /var/log/rabbitmq does not exist\\\"}\\n...ignoring\\n\\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/redis)\\nchanged: [localhost] => (item=/var/log/containers/redis)\\nok: [localhost] => (item=/var/run/redis)\\n\\nTASK [redis logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create /var/lib/sahara] **************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent sahara logs directory] *********************************\\nchanged: [localhost]\\n\\nTASK [sahara logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b0212a1177fa4a88502d17a1cbc31198040cf047\\\", \\\"msg\\\": \\\"Destination directory /var/log/sahara does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/srv/node)\\nchanged: [localhost] => (item=/var/log/swift)\\n\\nTASK [Create swift logging symlink] ********************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/srv/node)\\nok: [localhost] => (item=/var/log/swift)\\nok: [localhost] => (item=/var/log/containers)\\n\\nTASK [Set swift_use_local_disks fact] ******************************************\\nok: [localhost]\\n\\nTASK [Create Swift d1 directory if needed] *************************************\\nchanged: [localhost]\\n\\nTASK [swift logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [Format SwiftRawDisks] ****************************************************\\n\\nTASK [Mount devices defined in SwiftRawDisks] **********************************\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=60 changed=33 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:56:23,581] (heat-config) [DEBUG] [2018-06-25 05:56:00,842] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0_variables.json", "[2018-06-25 05:56:23,573] (heat-config) [INFO] Return code 
0", "[2018-06-25 05:56:23,574] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/aodh)", "changed: [localhost] => (item=/var/log/containers/httpd/aodh-api)", "", "TASK [aodh logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b6cf6dbe054f430c33d39c1a1a88593536d6e659\", \"msg\": \"Destination directory /var/log/aodh does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [ceilometer logs readme] **************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/cinder)", "changed: [localhost] => (item=/var/log/containers/httpd/cinder-api)", "", "TASK [cinder logs readme] ******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\", \"msg\": \"Destination directory /var/log/cinder does not exist\"}", "...ignoring", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/lib/cinder)", "ok: [localhost] => (item=/var/log/containers/cinder)", "", "TASK [ensure ceph configurations exist] ****************************************", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/log/containers/cinder)", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/log/containers/cinder)", "ok: [localhost] => (item=/var/lib/cinder)", "", "TASK [cinder_enable_iscsi_backend fact] ****************************************", "ok: [localhost]", "", "TASK [cinder create LVM volume group dd] ***************************************", "skipping: [localhost]", "", "TASK [cinder create LVM volume group] ******************************************", "skipping: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/glance)", "", "TASK [glance logs readme] ******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"e368ae3272baeb19e1113009ea5dae00e797c919\", \"msg\": \"Destination directory /var/log/glance does not exist\"}", "...ignoring", "", "TASK [set_fact] ****************************************************************", "skipping: [localhost]", "", "TASK [file] ********************************************************************", "skipping: [localhost]", "", "TASK [stat] ********************************************************************", "skipping: [localhost]", "", "TASK [copy] ********************************************************************", "skipping: [localhost] => (item={u'NETAPP_SHARE': u''}) ", "", "TASK [mount] *******************************************************************", "skipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) ", "", "TASK [Mount Node Staging Location] *********************************************", "skipping: [localhost]", "", "TASK [Mount NFS on host] *******************************************************", "skipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) ", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/gnocchi)", "changed: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)", "", "TASK [gnocchi logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\", \"msg\": \"Destination directory /var/log/gnocchi does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [get parameters] **********************************************************", "ok: [localhost]", "", "TASK [get DeployedSSLCertificatePath attributes] *******************************", "skipping: [localhost]", "", "TASK [Assign bootstrap node] ***************************************************", "skipping: [localhost]", "", "TASK [set is_bootstrap_node fact] **********************************************", "skipping: [localhost]", "", "TASK [get haproxy status] ******************************************************", "skipping: [localhost]", "", "TASK [get pacemaker status] ****************************************************", "skipping: [localhost]", "", "TASK [get docker status] *******************************************************", "skipping: [localhost]", "", "TASK [get container_id] ********************************************************", "skipping: [localhost]", "", "TASK [get pcs resource name for haproxy container] *****************************", "skipping: [localhost]", "", "TASK [remove DeployedSSLCertificatePath if is dir] *****************************", "skipping: [localhost]", "", "TASK [push certificate content] ************************************************", "skipping: [localhost]", "", "TASK [set certificate ownership] ***********************************************", "skipping: [localhost]", "", "TASK [reload haproxy if enabled] ***********************************************", "skipping: [localhost]", "", "TASK [restart pacemaker resource for haproxy] **********************************", "skipping: [localhost]", "", "TASK [set kolla_dir fact] ******************************************************", "skipping: [localhost]", "", "TASK [set certificate group 
on host via container] *****************************", "skipping: [localhost]", "", "TASK [copy certificate from kolla directory to final location] *****************", "skipping: [localhost]", "", "TASK [send restart order to haproxy container] *********************************", "skipping: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/lib/haproxy)", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/heat)", "changed: [localhost] => (item=/var/log/containers/httpd/heat-api)", "", "TASK [heat logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"d30ca3bda176434d31659e7379616dd162ddb246\", \"msg\": \"Destination directory /var/log/heat does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost] => (item=/var/log/containers/heat)", "changed: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/horizon)", "changed: [localhost] => (item=/var/log/containers/httpd/horizon)", "", "TASK [horizon logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ac324739761cb36b925d6e309482e26f7fe49b91\", \"msg\": \"Destination directory /var/log/horizon does not exist\"}", "...ignoring", "", "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", "ok: [localhost]", "", "TASK [Stop and disable iscsid.socket service] **********************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/keystone)", "changed: [localhost] => (item=/var/log/containers/httpd/keystone)", "", "TASK [keystone logs readme] ****************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"910be882addb6df99267e9bd303f6d9bf658562e\", \"msg\": \"Destination directory /var/log/keystone does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [memcached logs readme] ***************************************************", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/log/containers/mysql)", "ok: [localhost] => (item=/var/lib/mysql)", "", "TASK [mysql logs readme] *******************************************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/neutron)", "changed: [localhost] => (item=/var/log/containers/httpd/neutron-api)", "", "TASK [neutron logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost] => (item=/var/log/containers/neutron)", "", "TASK [create /var/lib/neutron] *************************************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/nova)", "changed: [localhost] => (item=/var/log/containers/httpd/nova-api)", "", "TASK [nova logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost] => (item=/var/log/containers/nova)", "changed: [localhost] => (item=/var/log/containers/httpd/nova-placement)", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/panko)", "changed: [localhost] => (item=/var/log/containers/httpd/panko-api)", "", "TASK [panko logs readme] *******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"903397bbd82e9b1f53087e3d7e8975d851857ce2\", \"msg\": \"Destination directory /var/log/panko does not exist\"}", "...ignoring", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/lib/rabbitmq)", "changed: [localhost] => (item=/var/log/containers/rabbitmq)", "", "TASK [rabbitmq logs readme] ****************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ee241f2199f264c9d0f384cf389fe255e8bf8a77\", \"msg\": \"Destination directory /var/log/rabbitmq does not exist\"}", "...ignoring", "", "TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/lib/redis)", "changed: [localhost] => (item=/var/log/containers/redis)", "ok: [localhost] => (item=/var/run/redis)", "", "TASK [redis logs readme] *******************************************************", "changed: [localhost]", "", "TASK [create /var/lib/sahara] **************************************************", "changed: [localhost]", "", "TASK [create persistent sahara logs directory] *********************************", "changed: [localhost]", "", "TASK [sahara logs readme] ******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"b0212a1177fa4a88502d17a1cbc31198040cf047\", \"msg\": \"Destination directory /var/log/sahara does not exist\"}", "...ignoring", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/srv/node)", "changed: [localhost] => (item=/var/log/swift)", "", "TASK [Create swift logging symlink] ********************************************", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/srv/node)", "ok: [localhost] => (item=/var/log/swift)", "ok: [localhost] => (item=/var/log/containers)", "", "TASK [Set swift_use_local_disks fact] ******************************************", "ok: [localhost]", "", "TASK [Create Swift d1 directory if needed] *************************************", "changed: [localhost]", "", "TASK [swift logs readme] *******************************************************", "changed: [localhost]", "", "TASK [Format SwiftRawDisks] ****************************************************", "", "TASK [Mount devices defined in SwiftRawDisks] **********************************", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=60 changed=33 unreachable=0 failed=0 ", "", "", "[2018-06-25 05:56:23,574] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0_playbook.yaml", "", "[2018-06-25 05:56:23,581] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-06-25 05:56:23,583] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0.json < 
/var/lib/heat-config/deployed/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0.notify.json", "[2018-06-25 05:56:23,985] (heat-config) [INFO] ", "[2018-06-25 05:56:23,986] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:56:23,955 p=25239 u=mistral | TASK [Output for ControllerHostPrepDeployment] ********************************* >2018-06-25 05:56:24,019 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:56:00,818] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0.json", > "[2018-06-25 05:56:23,581] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/aodh)\\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\\n\\nTASK [aodh logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"b6cf6dbe054f430c33d39c1a1a88593536d6e659\\\", \\\"msg\\\": \\\"Destination directory /var/log/aodh does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/cinder)\\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\\n\\nTASK [cinder logs readme] ******************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\\\", \\\"msg\\\": \\\"Destination directory /var/log/cinder does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/cinder)\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\nok: [localhost] => (item=/var/lib/cinder)\\n\\nTASK [cinder_enable_iscsi_backend fact] ****************************************\\nok: [localhost]\\n\\nTASK [cinder create LVM volume group dd] ***************************************\\nskipping: [localhost]\\n\\nTASK [cinder create LVM volume group] ******************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/glance)\\n\\nTASK [glance logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"e368ae3272baeb19e1113009ea5dae00e797c919\\\", \\\"msg\\\": \\\"Destination directory /var/log/glance does not exist\\\"}\\n...ignoring\\n\\nTASK [set_fact] ****************************************************************\\nskipping: [localhost]\\n\\nTASK [file] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [stat] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [copy] ********************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \\n\\nTASK [mount] *******************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \\n\\nTASK [Mount Node Staging Location] *********************************************\\nskipping: [localhost]\\n\\nTASK [Mount NFS on host] *******************************************************\\nskipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) \\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\\n\\nTASK [gnocchi logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\\\", \\\"msg\\\": \\\"Destination directory /var/log/gnocchi does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [get parameters] **********************************************************\\nok: [localhost]\\n\\nTASK [get DeployedSSLCertificatePath attributes] *******************************\\nskipping: [localhost]\\n\\nTASK [Assign bootstrap node] ***************************************************\\nskipping: [localhost]\\n\\nTASK [set is_bootstrap_node fact] **********************************************\\nskipping: [localhost]\\n\\nTASK [get haproxy status] ******************************************************\\nskipping: [localhost]\\n\\nTASK [get pacemaker status] ****************************************************\\nskipping: [localhost]\\n\\nTASK [get docker status] *******************************************************\\nskipping: [localhost]\\n\\nTASK [get container_id] ********************************************************\\nskipping: [localhost]\\n\\nTASK [get pcs resource name for haproxy container] *****************************\\nskipping: [localhost]\\n\\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\\nskipping: [localhost]\\n\\nTASK [push certificate content] ************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate ownership] ***********************************************\\nskipping: [localhost]\\n\\nTASK [reload haproxy if enabled] ***********************************************\\nskipping: [localhost]\\n\\nTASK [restart pacemaker resource for haproxy] **********************************\\nskipping: [localhost]\\n\\nTASK [set kolla_dir fact] ******************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate group on host via container] 
*****************************\\nskipping: [localhost]\\n\\nTASK [copy certificate from kolla directory to final location] *****************\\nskipping: [localhost]\\n\\nTASK [send restart order to haproxy container] *********************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/haproxy)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\\n\\nTASK [heat logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"d30ca3bda176434d31659e7379616dd162ddb246\\\", \\\"msg\\\": \\\"Destination directory /var/log/heat does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/horizon)\\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\\n\\nTASK [horizon logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ac324739761cb36b925d6e309482e26f7fe49b91\\\", \\\"msg\\\": \\\"Destination directory /var/log/horizon does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/keystone)\\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\\n\\nTASK [keystone logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"910be882addb6df99267e9bd303f6d9bf658562e\\\", \\\"msg\\\": \\\"Destination directory /var/log/keystone does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [memcached logs readme] ***************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/log/containers/mysql)\\nok: [localhost] => (item=/var/lib/mysql)\\n\\nTASK [mysql logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [create /var/lib/neutron] *************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/panko)\\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\\n\\nTASK [panko logs readme] *******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"903397bbd82e9b1f53087e3d7e8975d851857ce2\\\", \\\"msg\\\": \\\"Destination directory /var/log/panko does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/rabbitmq)\\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\\n\\nTASK [rabbitmq logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ee241f2199f264c9d0f384cf389fe255e8bf8a77\\\", \\\"msg\\\": \\\"Destination directory /var/log/rabbitmq does not exist\\\"}\\n...ignoring\\n\\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/redis)\\nchanged: [localhost] => (item=/var/log/containers/redis)\\nok: [localhost] => (item=/var/run/redis)\\n\\nTASK [redis logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create /var/lib/sahara] **************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent sahara logs directory] *********************************\\nchanged: [localhost]\\n\\nTASK [sahara logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b0212a1177fa4a88502d17a1cbc31198040cf047\\\", \\\"msg\\\": \\\"Destination directory /var/log/sahara does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/srv/node)\\nchanged: [localhost] => (item=/var/log/swift)\\n\\nTASK [Create swift logging symlink] ********************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/srv/node)\\nok: [localhost] => (item=/var/log/swift)\\nok: [localhost] => (item=/var/log/containers)\\n\\nTASK [Set swift_use_local_disks fact] ******************************************\\nok: [localhost]\\n\\nTASK [Create Swift d1 directory if needed] *************************************\\nchanged: [localhost]\\n\\nTASK [swift logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [Format SwiftRawDisks] ****************************************************\\n\\nTASK [Mount devices defined in SwiftRawDisks] **********************************\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=60 changed=33 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:56:23,581] (heat-config) [DEBUG] [2018-06-25 05:56:00,842] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0_variables.json", > "[2018-06-25 05:56:23,573] (heat-config) [INFO] Return 
code 0", > "[2018-06-25 05:56:23,574] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/aodh)", > "changed: [localhost] => (item=/var/log/containers/httpd/aodh-api)", > "", > "TASK [aodh logs readme] ********************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b6cf6dbe054f430c33d39c1a1a88593536d6e659\", \"msg\": \"Destination directory /var/log/aodh does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [ceilometer logs readme] **************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/cinder)", > "changed: [localhost] => (item=/var/log/containers/httpd/cinder-api)", > "", > "TASK [cinder logs readme] ******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\", \"msg\": \"Destination directory /var/log/cinder does not exist\"}", > "...ignoring", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/lib/cinder)", > "ok: [localhost] => (item=/var/log/containers/cinder)", > "", > "TASK [ensure ceph configurations exist] ****************************************", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/log/containers/cinder)", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/log/containers/cinder)", > "ok: [localhost] => (item=/var/lib/cinder)", > "", > "TASK [cinder_enable_iscsi_backend fact] ****************************************", > "ok: [localhost]", > "", > "TASK [cinder create LVM volume group dd] ***************************************", > "skipping: [localhost]", > "", > "TASK [cinder create LVM volume group] ******************************************", > "skipping: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/glance)", > "", > "TASK [glance logs readme] ******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"e368ae3272baeb19e1113009ea5dae00e797c919\", \"msg\": \"Destination directory /var/log/glance does not exist\"}", > "...ignoring", > "", > "TASK [set_fact] ****************************************************************", > "skipping: [localhost]", > "", > "TASK [file] ********************************************************************", > "skipping: [localhost]", > "", > "TASK [stat] ********************************************************************", > "skipping: [localhost]", > "", > "TASK [copy] ********************************************************************", > "skipping: [localhost] => (item={u'NETAPP_SHARE': u''}) ", > "", > "TASK [mount] *******************************************************************", > "skipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) ", > "", > "TASK [Mount Node Staging Location] *********************************************", > "skipping: [localhost]", > "", > "TASK [Mount NFS on host] *******************************************************", > "skipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) ", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/gnocchi)", > "changed: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)", > "", > "TASK [gnocchi logs readme] *****************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\", \"msg\": \"Destination directory /var/log/gnocchi does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [get parameters] **********************************************************", > "ok: [localhost]", > "", > "TASK [get DeployedSSLCertificatePath attributes] *******************************", > "skipping: [localhost]", > "", > "TASK [Assign bootstrap node] ***************************************************", > "skipping: [localhost]", > "", > "TASK [set is_bootstrap_node fact] **********************************************", > "skipping: [localhost]", > "", > "TASK [get haproxy status] ******************************************************", > "skipping: [localhost]", > "", > "TASK [get pacemaker status] ****************************************************", > "skipping: [localhost]", > "", > "TASK [get docker status] *******************************************************", > "skipping: [localhost]", > "", > "TASK [get container_id] ********************************************************", > "skipping: [localhost]", > "", > "TASK [get pcs resource name for haproxy container] *****************************", > "skipping: [localhost]", > "", > "TASK [remove DeployedSSLCertificatePath if is dir] *****************************", > "skipping: [localhost]", > "", > "TASK [push certificate content] ************************************************", > "skipping: [localhost]", > "", > "TASK [set certificate ownership] ***********************************************", > "skipping: [localhost]", > "", > "TASK [reload haproxy if enabled] ***********************************************", > "skipping: [localhost]", > "", > "TASK [restart pacemaker resource for haproxy] **********************************", > "skipping: [localhost]", > "", > "TASK [set kolla_dir fact] 
******************************************************", > "skipping: [localhost]", > "", > "TASK [set certificate group on host via container] *****************************", > "skipping: [localhost]", > "", > "TASK [copy certificate from kolla directory to final location] *****************", > "skipping: [localhost]", > "", > "TASK [send restart order to haproxy container] *********************************", > "skipping: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/lib/haproxy)", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/heat)", > "changed: [localhost] => (item=/var/log/containers/httpd/heat-api)", > "", > "TASK [heat logs readme] ********************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"d30ca3bda176434d31659e7379616dd162ddb246\", \"msg\": \"Destination directory /var/log/heat does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost] => (item=/var/log/containers/heat)", > "changed: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/horizon)", > "changed: [localhost] => (item=/var/log/containers/httpd/horizon)", > "", > "TASK [horizon logs readme] *****************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ac324739761cb36b925d6e309482e26f7fe49b91\", \"msg\": \"Destination directory /var/log/horizon does not exist\"}", > "...ignoring", > "", > "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", > "ok: [localhost]", > "", > "TASK [Stop and disable iscsid.socket service] **********************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/keystone)", > "changed: [localhost] => (item=/var/log/containers/httpd/keystone)", > "", > "TASK [keystone logs readme] ****************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"910be882addb6df99267e9bd303f6d9bf658562e\", \"msg\": \"Destination directory /var/log/keystone does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [memcached logs readme] ***************************************************", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/log/containers/mysql)", > "ok: [localhost] => (item=/var/lib/mysql)", > "", > "TASK [mysql logs readme] *******************************************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/neutron)", > "changed: [localhost] => (item=/var/log/containers/httpd/neutron-api)", > "", > "TASK [neutron logs readme] *****************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost] => (item=/var/log/containers/neutron)", > "", > "TASK [create /var/lib/neutron] *************************************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/nova)", > "changed: [localhost] => (item=/var/log/containers/httpd/nova-api)", > "", > "TASK [nova logs readme] ********************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost] => (item=/var/log/containers/nova)", > "changed: [localhost] => (item=/var/log/containers/httpd/nova-placement)", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/panko)", > "changed: [localhost] => (item=/var/log/containers/httpd/panko-api)", > "", > "TASK [panko logs readme] *******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"903397bbd82e9b1f53087e3d7e8975d851857ce2\", \"msg\": \"Destination directory /var/log/panko does not exist\"}", > "...ignoring", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/lib/rabbitmq)", > "changed: [localhost] => (item=/var/log/containers/rabbitmq)", > "", > "TASK [rabbitmq logs readme] ****************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ee241f2199f264c9d0f384cf389fe255e8bf8a77\", \"msg\": \"Destination directory /var/log/rabbitmq does not exist\"}", > "...ignoring", > "", > "TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/lib/redis)", > "changed: [localhost] => (item=/var/log/containers/redis)", > "ok: [localhost] => (item=/var/run/redis)", > "", > "TASK [redis logs readme] *******************************************************", > "changed: [localhost]", > "", > "TASK [create /var/lib/sahara] **************************************************", > "changed: [localhost]", > "", > "TASK [create persistent sahara logs directory] *********************************", > "changed: [localhost]", > "", > "TASK [sahara logs readme] ******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"b0212a1177fa4a88502d17a1cbc31198040cf047\", \"msg\": \"Destination directory /var/log/sahara does not exist\"}", > "...ignoring", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/srv/node)", > "changed: [localhost] => (item=/var/log/swift)", > "", > "TASK [Create swift logging symlink] ********************************************", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/srv/node)", > "ok: [localhost] => (item=/var/log/swift)", > "ok: [localhost] => (item=/var/log/containers)", > "", > "TASK [Set swift_use_local_disks fact] ******************************************", > "ok: [localhost]", > "", > "TASK [Create Swift d1 directory if needed] *************************************", > "changed: [localhost]", > "", > "TASK [swift logs readme] *******************************************************", > "changed: [localhost]", > "", > "TASK [Format SwiftRawDisks] ****************************************************", > "", > "TASK [Mount devices defined in SwiftRawDisks] **********************************", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=60 changed=33 unreachable=0 failed=0 ", > "", > "", > "[2018-06-25 05:56:23,574] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0_playbook.yaml", > "", > "[2018-06-25 05:56:23,581] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-06-25 05:56:23,583] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0.json < /var/lib/heat-config/deployed/1e7f0348-d6eb-41ce-b6bf-17e14d2d70c0.notify.json", > "[2018-06-25 05:56:23,985] (heat-config) [INFO] ", > "[2018-06-25 05:56:23,986] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:56:24,051 p=25239 u=mistral | TASK [Check-mode for Run deployment ControllerHostPrepDeployment] ************** >2018-06-25 05:56:24,067 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:24,089 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:56:24,140 p=25239 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc"}, "changed": false} >2018-06-25 05:56:24,165 p=25239 u=mistral | TASK [Render deployment file for ControllerArtifactsDeploy] ******************** >2018-06-25 05:56:24,846 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "32c53cc5ad1b57b40e1db21e4c8fe53ed805308e", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerArtifactsDeploy-da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc", "gid": 0, "group": "root", "md5sum": "f84ecd40011ea0c49394ca57d094aaab", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2021, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920584.22-132020212480003/source", "state": "file", "uid": 0} >2018-06-25 05:56:24,870 p=25239 u=mistral | TASK [Check if deployed file exists for ControllerArtifactsDeploy] ************* >2018-06-25 05:56:25,227 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:56:25,252 p=25239 u=mistral | TASK [Check previous deployment rc for ControllerArtifactsDeploy] ************** >2018-06-25 05:56:25,269 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-06-25 05:56:25,293 p=25239 u=mistral | TASK [Remove deployed file for ControllerArtifactsDeploy when previous deployment failed] *** >2018-06-25 05:56:25,310 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:25,334 p=25239 u=mistral | TASK [Force remove deployed file for ControllerArtifactsDeploy] **************** >2018-06-25 05:56:25,350 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:25,373 p=25239 u=mistral | TASK [Run deployment ControllerArtifactsDeploy] ******************************** >2018-06-25 05:56:26,260 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc.notify.json)", "delta": "0:00:00.530444", "end": "2018-06-25 05:56:26.346551", "rc": 0, "start": "2018-06-25 05:56:25.816107", "stderr": "[2018-06-25 05:56:25,841] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc.json\n[2018-06-25 05:56:25,874] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:56:25,874] (heat-config) [DEBUG] [2018-06-25 05:56:25,863] (heat-config) [INFO] artifact_urls=\n[2018-06-25 05:56:25,863] (heat-config) [INFO] deploy_server_id=36f09d61-d3be-4f36-b08d-65f6c3b139be\n[2018-06-25 05:56:25,864] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-25 05:56:25,864] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-xkma6mui6qhp-ControllerArtifactsDeploy-vazaffzm65br-0-xsxb4z6wqbzi/677c556a-649a-4292-bb7d-166f31972429\n[2018-06-25 05:56:25,864] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-25 05:56:25,864] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-25 05:56:25,864] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc\n[2018-06-25 05:56:25,870] (heat-config) [INFO] No artifact_urls was set. Skipping...\n\n[2018-06-25 05:56:25,871] (heat-config) [DEBUG] \n[2018-06-25 05:56:25,871] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc\n\n[2018-06-25 05:56:25,874] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-25 05:56:25,874] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc.json < /var/lib/heat-config/deployed/da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc.notify.json\n[2018-06-25 05:56:26,339] (heat-config) [INFO] \n[2018-06-25 05:56:26,340] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:56:25,841] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc.json", "[2018-06-25 05:56:25,874] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:56:25,874] (heat-config) [DEBUG] [2018-06-25 05:56:25,863] (heat-config) [INFO] artifact_urls=", "[2018-06-25 05:56:25,863] (heat-config) [INFO] deploy_server_id=36f09d61-d3be-4f36-b08d-65f6c3b139be", "[2018-06-25 05:56:25,864] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-25 05:56:25,864] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-xkma6mui6qhp-ControllerArtifactsDeploy-vazaffzm65br-0-xsxb4z6wqbzi/677c556a-649a-4292-bb7d-166f31972429", "[2018-06-25 05:56:25,864] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-25 05:56:25,864] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-25 05:56:25,864] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc", "[2018-06-25 05:56:25,870] (heat-config) [INFO] No artifact_urls was set. Skipping...", "", "[2018-06-25 05:56:25,871] (heat-config) [DEBUG] ", "[2018-06-25 05:56:25,871] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc", "", "[2018-06-25 05:56:25,874] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-25 05:56:25,874] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc.json < /var/lib/heat-config/deployed/da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc.notify.json", "[2018-06-25 05:56:26,339] (heat-config) [INFO] ", "[2018-06-25 05:56:26,340] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:56:26,293 p=25239 u=mistral | TASK [Output for ControllerArtifactsDeploy] ************************************ >2018-06-25 05:56:26,353 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:56:25,841] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < 
/var/lib/heat-config/deployed/da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc.json", > "[2018-06-25 05:56:25,874] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:56:25,874] (heat-config) [DEBUG] [2018-06-25 05:56:25,863] (heat-config) [INFO] artifact_urls=", > "[2018-06-25 05:56:25,863] (heat-config) [INFO] deploy_server_id=36f09d61-d3be-4f36-b08d-65f6c3b139be", > "[2018-06-25 05:56:25,864] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-25 05:56:25,864] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-xkma6mui6qhp-ControllerArtifactsDeploy-vazaffzm65br-0-xsxb4z6wqbzi/677c556a-649a-4292-bb7d-166f31972429", > "[2018-06-25 05:56:25,864] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-25 05:56:25,864] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-25 05:56:25,864] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc", > "[2018-06-25 05:56:25,870] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", > "", > "[2018-06-25 05:56:25,871] (heat-config) [DEBUG] ", > "[2018-06-25 05:56:25,871] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc", > "", > "[2018-06-25 05:56:25,874] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-25 05:56:25,874] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc.json < /var/lib/heat-config/deployed/da4e2a4b-3267-4f9e-a7f8-d8aea9578ebc.notify.json", > "[2018-06-25 05:56:26,339] (heat-config) [INFO] ", > "[2018-06-25 05:56:26,340] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:56:26,379 p=25239 u=mistral | TASK [Check-mode for Run deployment ControllerArtifactsDeploy] ***************** >2018-06-25 05:56:26,395 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:26,419 p=25239 u=mistral | TASK [include] ***************************************************************** >2018-06-25 05:56:26,627 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/Compute/deployments.yaml for compute-0 >2018-06-25 05:56:26,635 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/Compute/deployments.yaml for compute-0 >2018-06-25 05:56:26,643 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/Compute/deployments.yaml for compute-0 >2018-06-25 05:56:26,651 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/Compute/deployments.yaml for compute-0 >2018-06-25 05:56:26,660 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/Compute/deployments.yaml for compute-0 >2018-06-25 05:56:26,668 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/Compute/deployments.yaml for compute-0 >2018-06-25 
05:56:26,676 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/Compute/deployments.yaml for compute-0 >2018-06-25 05:56:26,684 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/Compute/deployments.yaml for compute-0 >2018-06-25 05:56:26,722 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:56:26,783 p=25239 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "659df13b-b948-4339-ac23-a23cb8ba0f72"}, "changed": false} >2018-06-25 05:56:26,801 p=25239 u=mistral | TASK [Render deployment file for NetworkDeployment] **************************** >2018-06-25 05:56:27,468 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "7fab37dbe995aab592e15a254800a91f103195c6", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-659df13b-b948-4339-ac23-a23cb8ba0f72", "gid": 0, "group": "root", "md5sum": "e1644e6a82927c77bd9416c098b7038c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9259, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920586.86-240640625445299/source", "state": "file", "uid": 0} >2018-06-25 05:56:27,489 p=25239 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] ********************* >2018-06-25 05:56:27,843 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:56:27,866 p=25239 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] ********************** >2018-06-25 05:56:27,884 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:27,904 p=25239 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] *** >2018-06-25 05:56:27,922 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-06-25 05:56:27,941 p=25239 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************ >2018-06-25 05:56:27,957 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:27,977 p=25239 u=mistral | TASK [Run deployment NetworkDeployment] **************************************** >2018-06-25 05:56:48,294 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/659df13b-b948-4339-ac23-a23cb8ba0f72.notify.json)", "delta": "0:00:19.900070", "end": "2018-06-25 05:56:48.377943", "rc": 0, "start": "2018-06-25 05:56:28.477873", "stderr": "[2018-06-25 05:56:28,507] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/659df13b-b948-4339-ac23-a23cb8ba0f72.json\n[2018-06-25 05:56:47,937] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", 
\\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/25 05:56:28 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/25 05:56:28 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/25 05:56:28 AM] [INFO] Not using any mapping file.\\n[2018/06/25 05:56:29 AM] [INFO] Finding active nics\\n[2018/06/25 05:56:29 AM] [INFO] eth0 is an embedded active 
nic\\n[2018/06/25 05:56:29 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/25 05:56:29 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/25 05:56:29 AM] [INFO] lo is not an active nic\\n[2018/06/25 05:56:29 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/25 05:56:29 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/25 05:56:29 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/25 05:56:29 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/25 05:56:29 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/25 05:56:29 AM] [INFO] adding interface: eth0\\n[2018/06/25 05:56:29 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/25 05:56:29 AM] [INFO] adding bridge: br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] adding interface: eth1\\n[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan20\\n[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan30\\n[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan50\\n[2018/06/25 05:56:29 AM] [INFO] adding interface: eth2\\n[2018/06/25 05:56:29 AM] [INFO] applying network configs...\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/25 05:56:29 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/25 05:56:29 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] running ifup on interface: eth2\\n[2018/06/25 05:56:30 AM] [INFO] running ifup on interface: eth1\\n[2018/06/25 05:56:30 AM] [INFO] running ifup on interface: 
eth0\\n[2018/06/25 05:56:34 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/25 05:56:38 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:56:42 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-06-25 05:56:47,937] (heat-config) [DEBUG] [2018-06-25 05:56:28,531] (heat-config) [INFO] interface_name=nic1\n[2018-06-25 05:56:28,531] (heat-config) [INFO] bridge_name=br-ex\n[2018-06-25 05:56:28,531] (heat-config) [INFO] deploy_server_id=8c72c6a7-e03d-47d0-9bdc-b75a20bdbd88\n[2018-06-25 05:56:28,531] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-25 05:56:28,531] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-bpzjhxhykqff-0-v56vmsgzsasw-NetworkDeployment-n4ftybgflf4b-TripleOSoftwareDeployment-u34bcmzbnl7f/3e1ffd79-d371-4ebd-af70-97651880410e\n[2018-06-25 05:56:28,531] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-25 05:56:28,531] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-25 05:56:28,531] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/659df13b-b948-4339-ac23-a23cb8ba0f72\n[2018-06-25 05:56:47,932] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS\n\n[2018-06-25 05:56:47,932] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": 
\"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/06/25 05:56:28 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/06/25 05:56:28 AM] [INFO] Ifcfg net config provider created.\n[2018/06/25 05:56:28 AM] [INFO] Not using any mapping file.\n[2018/06/25 05:56:29 AM] [INFO] Finding active nics\n[2018/06/25 
05:56:29 AM] [INFO] eth0 is an embedded active nic\n[2018/06/25 05:56:29 AM] [INFO] eth1 is an embedded active nic\n[2018/06/25 05:56:29 AM] [INFO] eth2 is an embedded active nic\n[2018/06/25 05:56:29 AM] [INFO] lo is not an active nic\n[2018/06/25 05:56:29 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/06/25 05:56:29 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/06/25 05:56:29 AM] [INFO] nic3 mapped to: eth2\n[2018/06/25 05:56:29 AM] [INFO] nic2 mapped to: eth1\n[2018/06/25 05:56:29 AM] [INFO] nic1 mapped to: eth0\n[2018/06/25 05:56:29 AM] [INFO] adding interface: eth0\n[2018/06/25 05:56:29 AM] [INFO] adding custom route for interface: eth0\n[2018/06/25 05:56:29 AM] [INFO] adding bridge: br-isolated\n[2018/06/25 05:56:29 AM] [INFO] adding interface: eth1\n[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan20\n[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan30\n[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan50\n[2018/06/25 05:56:29 AM] [INFO] adding interface: eth2\n[2018/06/25 05:56:29 AM] [INFO] applying network configs...\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan20\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan50\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: eth2\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: eth1\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: eth0\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan20\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan50\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/06/25 05:56:29 
AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/06/25 05:56:29 AM] [INFO] running ifup on bridge: br-isolated\n[2018/06/25 05:56:29 AM] [INFO] running ifup on interface: eth2\n[2018/06/25 05:56:30 AM] [INFO] running ifup on interface: eth1\n[2018/06/25 05:56:30 AM] [INFO] running ifup on interface: 
eth0\n[2018/06/25 05:56:34 AM] [INFO] running ifup on interface: vlan20\n[2018/06/25 05:56:38 AM] [INFO] running ifup on interface: vlan30\n[2018/06/25 05:56:42 AM] [INFO] running ifup on interface: vlan50\n[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan20\n[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan30\n[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.3\n++ '[' -n 192.168.24.3 ']'\n++ break\n++ echo 192.168.24.3\n+ local METADATA_IP=192.168.24.3\n+ '[' -n 192.168.24.3 ']'\n+ is_local_ip 192.168.24.3\n+ local IP_TO_CHECK=192.168.24.3\n+ ip -o a\n+ grep 'inet6\\? 
192.168.24.3/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\n+ _ping=ping\n+ [[ 192.168.24.3 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.3\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-06-25 05:56:47,932] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/659df13b-b948-4339-ac23-a23cb8ba0f72\n\n[2018-06-25 05:56:47,937] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-25 05:56:47,938] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/659df13b-b948-4339-ac23-a23cb8ba0f72.json < /var/lib/heat-config/deployed/659df13b-b948-4339-ac23-a23cb8ba0f72.notify.json\n[2018-06-25 05:56:48,370] (heat-config) [INFO] \n[2018-06-25 05:56:48,370] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:56:28,507] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/659df13b-b948-4339-ac23-a23cb8ba0f72.json", "[2018-06-25 05:56:47,937] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/25 
05:56:28 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/25 05:56:28 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/25 05:56:28 AM] [INFO] Not using any mapping file.\\n[2018/06/25 05:56:29 AM] [INFO] Finding active nics\\n[2018/06/25 05:56:29 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/25 05:56:29 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/25 05:56:29 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/25 05:56:29 AM] [INFO] lo is not an active nic\\n[2018/06/25 05:56:29 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/25 05:56:29 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/25 05:56:29 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/25 05:56:29 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/25 05:56:29 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/25 05:56:29 AM] [INFO] adding interface: eth0\\n[2018/06/25 05:56:29 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/25 05:56:29 AM] [INFO] adding bridge: br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] adding interface: eth1\\n[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan20\\n[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan30\\n[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan50\\n[2018/06/25 05:56:29 AM] [INFO] adding interface: eth2\\n[2018/06/25 05:56:29 AM] [INFO] applying network configs...\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on 
interface: vlan50\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/25 05:56:29 AM] [INFO] Writing 
config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/25 05:56:29 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] running ifup on interface: eth2\\n[2018/06/25 05:56:30 AM] [INFO] running ifup on interface: eth1\\n[2018/06/25 05:56:30 AM] [INFO] running ifup on interface: eth0\\n[2018/06/25 05:56:34 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/25 05:56:38 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:56:42 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 
']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-06-25 05:56:47,937] (heat-config) [DEBUG] [2018-06-25 05:56:28,531] (heat-config) [INFO] interface_name=nic1", "[2018-06-25 05:56:28,531] (heat-config) [INFO] bridge_name=br-ex", "[2018-06-25 05:56:28,531] (heat-config) [INFO] deploy_server_id=8c72c6a7-e03d-47d0-9bdc-b75a20bdbd88", "[2018-06-25 05:56:28,531] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-25 05:56:28,531] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-bpzjhxhykqff-0-v56vmsgzsasw-NetworkDeployment-n4ftybgflf4b-TripleOSoftwareDeployment-u34bcmzbnl7f/3e1ffd79-d371-4ebd-af70-97651880410e", "[2018-06-25 05:56:28,531] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-25 05:56:28,531] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-25 05:56:28,531] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/659df13b-b948-4339-ac23-a23cb8ba0f72", "[2018-06-25 05:56:47,932] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", "", "[2018-06-25 05:56:47,932] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": 
\"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/06/25 05:56:28 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/06/25 05:56:28 AM] [INFO] Ifcfg net 
config provider created.", "[2018/06/25 05:56:28 AM] [INFO] Not using any mapping file.", "[2018/06/25 05:56:29 AM] [INFO] Finding active nics", "[2018/06/25 05:56:29 AM] [INFO] eth0 is an embedded active nic", "[2018/06/25 05:56:29 AM] [INFO] eth1 is an embedded active nic", "[2018/06/25 05:56:29 AM] [INFO] eth2 is an embedded active nic", "[2018/06/25 05:56:29 AM] [INFO] lo is not an active nic", "[2018/06/25 05:56:29 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/06/25 05:56:29 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/06/25 05:56:29 AM] [INFO] nic3 mapped to: eth2", "[2018/06/25 05:56:29 AM] [INFO] nic2 mapped to: eth1", "[2018/06/25 05:56:29 AM] [INFO] nic1 mapped to: eth0", "[2018/06/25 05:56:29 AM] [INFO] adding interface: eth0", "[2018/06/25 05:56:29 AM] [INFO] adding custom route for interface: eth0", "[2018/06/25 05:56:29 AM] [INFO] adding bridge: br-isolated", "[2018/06/25 05:56:29 AM] [INFO] adding interface: eth1", "[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan20", "[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan30", "[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan50", "[2018/06/25 05:56:29 AM] [INFO] adding interface: eth2", "[2018/06/25 05:56:29 AM] [INFO] applying network configs...", "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan20", "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan50", "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: eth2", "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: eth1", "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: eth0", "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan20", "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan50", "[2018/06/25 05:56:29 AM] [INFO] running ifdown on bridge: 
br-isolated", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", 
"[2018/06/25 05:56:29 AM] [INFO] running ifup on bridge: br-isolated", "[2018/06/25 05:56:29 AM] [INFO] running ifup on interface: eth2", "[2018/06/25 05:56:30 AM] [INFO] running ifup on interface: eth1", "[2018/06/25 05:56:30 AM] [INFO] running ifup on interface: eth0", "[2018/06/25 05:56:34 AM] [INFO] running ifup on interface: vlan20", "[2018/06/25 05:56:38 AM] [INFO] running ifup on interface: vlan30", "[2018/06/25 05:56:42 AM] [INFO] running ifup on interface: vlan50", "[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan20", "[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan30", "[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.3", "++ '[' -n 192.168.24.3 ']'", "++ break", "++ echo 192.168.24.3", "+ local METADATA_IP=192.168.24.3", "+ '[' -n 192.168.24.3 ']'", "+ is_local_ip 192.168.24.3", "+ local 
IP_TO_CHECK=192.168.24.3", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.3/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", "+ _ping=ping", "+ [[ 192.168.24.3 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.3", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-06-25 05:56:47,932] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/659df13b-b948-4339-ac23-a23cb8ba0f72", "", "[2018-06-25 05:56:47,937] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-25 05:56:47,938] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/659df13b-b948-4339-ac23-a23cb8ba0f72.json < /var/lib/heat-config/deployed/659df13b-b948-4339-ac23-a23cb8ba0f72.notify.json", "[2018-06-25 05:56:48,370] (heat-config) [INFO] ", "[2018-06-25 05:56:48,370] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:56:48,314 p=25239 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-06-25 05:56:48,368 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:56:28,507] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/659df13b-b948-4339-ac23-a23cb8ba0f72.json", > "[2018-06-25 05:56:47,937] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": 
[{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, 
{\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/25 05:56:28 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/25 05:56:28 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/25 05:56:28 AM] [INFO] Not using any mapping file.\\n[2018/06/25 05:56:29 AM] [INFO] Finding active nics\\n[2018/06/25 05:56:29 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/25 05:56:29 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/25 05:56:29 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/25 05:56:29 AM] [INFO] lo is not an active nic\\n[2018/06/25 05:56:29 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/25 05:56:29 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/25 05:56:29 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/25 05:56:29 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/25 05:56:29 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/25 05:56:29 AM] [INFO] adding interface: eth0\\n[2018/06/25 05:56:29 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/25 05:56:29 AM] [INFO] adding bridge: br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] adding interface: eth1\\n[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan20\\n[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan30\\n[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan50\\n[2018/06/25 05:56:29 AM] [INFO] adding interface: eth2\\n[2018/06/25 05:56:29 AM] [INFO] applying network configs...\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/25 
05:56:29 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/25 05:56:29 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/25 05:56:29 AM] 
[INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/25 05:56:29 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/25 05:56:29 AM] [INFO] running ifup on interface: eth2\\n[2018/06/25 05:56:30 AM] [INFO] running ifup on interface: eth1\\n[2018/06/25 05:56:30 AM] [INFO] running ifup on interface: eth0\\n[2018/06/25 05:56:34 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/25 05:56:38 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:56:42 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-06-25 05:56:47,937] (heat-config) [DEBUG] [2018-06-25 05:56:28,531] (heat-config) [INFO] interface_name=nic1", > "[2018-06-25 05:56:28,531] (heat-config) [INFO] bridge_name=br-ex", > "[2018-06-25 05:56:28,531] (heat-config) [INFO] deploy_server_id=8c72c6a7-e03d-47d0-9bdc-b75a20bdbd88", > "[2018-06-25 05:56:28,531] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-25 05:56:28,531] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-bpzjhxhykqff-0-v56vmsgzsasw-NetworkDeployment-n4ftybgflf4b-TripleOSoftwareDeployment-u34bcmzbnl7f/3e1ffd79-d371-4ebd-af70-97651880410e", > "[2018-06-25 05:56:28,531] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-25 05:56:28,531] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-25 05:56:28,531] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/659df13b-b948-4339-ac23-a23cb8ba0f72", > "[2018-06-25 
05:56:47,932] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", > "", > "[2018-06-25 05:56:47,932] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", > "++ 
type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/06/25 05:56:28 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/06/25 05:56:28 AM] [INFO] Ifcfg net config provider created.", > "[2018/06/25 05:56:28 AM] [INFO] Not using any mapping file.", > "[2018/06/25 05:56:29 AM] [INFO] Finding active nics", > "[2018/06/25 05:56:29 AM] [INFO] eth0 is an embedded active nic", > "[2018/06/25 05:56:29 AM] [INFO] eth1 is an embedded active nic", > "[2018/06/25 05:56:29 AM] [INFO] eth2 is an embedded active nic", > "[2018/06/25 05:56:29 AM] [INFO] lo is not an active nic", > "[2018/06/25 05:56:29 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/06/25 05:56:29 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/06/25 05:56:29 AM] [INFO] nic3 mapped to: eth2", > "[2018/06/25 05:56:29 AM] [INFO] nic2 mapped to: eth1", > "[2018/06/25 05:56:29 AM] [INFO] nic1 mapped to: eth0", > "[2018/06/25 05:56:29 AM] [INFO] adding interface: eth0", > "[2018/06/25 05:56:29 AM] [INFO] adding custom route for interface: eth0", > "[2018/06/25 05:56:29 AM] [INFO] adding bridge: br-isolated", > "[2018/06/25 05:56:29 AM] [INFO] adding interface: eth1", > "[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan20", > "[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan30", > "[2018/06/25 05:56:29 AM] [INFO] adding vlan: vlan50", > "[2018/06/25 05:56:29 AM] [INFO] adding interface: eth2", > "[2018/06/25 05:56:29 AM] [INFO] applying network configs...", > "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan20", > "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan50", > "[2018/06/25 
05:56:29 AM] [INFO] running ifdown on interface: eth2", > "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: eth1", > "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: eth0", > "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan20", > "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/25 05:56:29 AM] [INFO] running ifdown on interface: vlan50", > "[2018/06/25 05:56:29 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/06/25 05:56:29 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-eth0", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/06/25 05:56:29 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/06/25 05:56:29 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/06/25 05:56:29 AM] [INFO] running ifup on interface: eth2", > "[2018/06/25 05:56:30 AM] [INFO] running ifup on interface: eth1", > "[2018/06/25 05:56:30 AM] [INFO] running ifup on interface: eth0", > "[2018/06/25 05:56:34 AM] [INFO] running ifup on interface: vlan20", > "[2018/06/25 05:56:38 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/25 05:56:42 AM] [INFO] running ifup on interface: vlan50", > "[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan20", > "[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/25 05:56:47 AM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 
's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.3", > "++ '[' -n 192.168.24.3 ']'", > "++ break", > "++ echo 192.168.24.3", > "+ local METADATA_IP=192.168.24.3", > "+ '[' -n 192.168.24.3 ']'", > "+ is_local_ip 192.168.24.3", > "+ local IP_TO_CHECK=192.168.24.3", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.3/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", > "+ _ping=ping", > "+ [[ 192.168.24.3 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.3", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-06-25 05:56:47,932] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/659df13b-b948-4339-ac23-a23cb8ba0f72", > "", > "[2018-06-25 05:56:47,937] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-25 05:56:47,938] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/659df13b-b948-4339-ac23-a23cb8ba0f72.json < /var/lib/heat-config/deployed/659df13b-b948-4339-ac23-a23cb8ba0f72.notify.json", > "[2018-06-25 05:56:48,370] (heat-config) [INFO] ", > "[2018-06-25 05:56:48,370] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:56:48,388 p=25239 u=mistral | TASK [Check-mode for Run deployment 
NetworkDeployment] ************************* >2018-06-25 05:56:48,402 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:48,419 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:56:48,464 p=25239 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "0986b480-0bc9-4cd5-9205-a540894e1072"}, "changed": false} >2018-06-25 05:56:48,484 p=25239 u=mistral | TASK [Render deployment file for NovaComputeUpgradeInitDeployment] ************* >2018-06-25 05:56:49,155 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "a5f220ccbb36cb519f89114a53c169fc1f372e51", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeUpgradeInitDeployment-0986b480-0bc9-4cd5-9205-a540894e1072", "gid": 0, "group": "root", "md5sum": "5cb88427513fb77d7d1b5ed5cc93e4ee", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1182, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920608.53-154433110424812/source", "state": "file", "uid": 0} >2018-06-25 05:56:49,176 p=25239 u=mistral | TASK [Check if deployed file exists for NovaComputeUpgradeInitDeployment] ****** >2018-06-25 05:56:49,530 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:56:49,550 p=25239 u=mistral | TASK [Check previous deployment rc for NovaComputeUpgradeInitDeployment] ******* >2018-06-25 05:56:49,569 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:49,588 p=25239 u=mistral | TASK [Remove deployed file for NovaComputeUpgradeInitDeployment when previous deployment failed] *** >2018-06-25 05:56:49,605 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:49,626 p=25239 u=mistral | TASK [Force remove 
deployed file for NovaComputeUpgradeInitDeployment] ********* >2018-06-25 05:56:49,642 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:49,661 p=25239 u=mistral | TASK [Run deployment NovaComputeUpgradeInitDeployment] ************************* >2018-06-25 05:56:50,497 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/0986b480-0bc9-4cd5-9205-a540894e1072.notify.json)", "delta": "0:00:00.483638", "end": "2018-06-25 05:56:50.594196", "rc": 0, "start": "2018-06-25 05:56:50.110558", "stderr": "[2018-06-25 05:56:50,136] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0986b480-0bc9-4cd5-9205-a540894e1072.json\n[2018-06-25 05:56:50,168] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:56:50,168] (heat-config) [DEBUG] [2018-06-25 05:56:50,159] (heat-config) [INFO] deploy_server_id=8c72c6a7-e03d-47d0-9bdc-b75a20bdbd88\n[2018-06-25 05:56:50,160] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-25 05:56:50,160] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-bpzjhxhykqff-0-v56vmsgzsasw-NovaComputeUpgradeInitDeployment-ylk2bbygcbga/229eba28-ea1e-4396-970c-2835412d4667\n[2018-06-25 05:56:50,160] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-25 05:56:50,160] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-25 05:56:50,160] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0986b480-0bc9-4cd5-9205-a540894e1072\n[2018-06-25 05:56:50,165] (heat-config) [INFO] \n[2018-06-25 05:56:50,165] (heat-config) [DEBUG] \n[2018-06-25 05:56:50,165] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0986b480-0bc9-4cd5-9205-a540894e1072\n\n[2018-06-25 05:56:50,168] 
(heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-25 05:56:50,169] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0986b480-0bc9-4cd5-9205-a540894e1072.json < /var/lib/heat-config/deployed/0986b480-0bc9-4cd5-9205-a540894e1072.notify.json\n[2018-06-25 05:56:50,588] (heat-config) [INFO] \n[2018-06-25 05:56:50,588] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:56:50,136] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0986b480-0bc9-4cd5-9205-a540894e1072.json", "[2018-06-25 05:56:50,168] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:56:50,168] (heat-config) [DEBUG] [2018-06-25 05:56:50,159] (heat-config) [INFO] deploy_server_id=8c72c6a7-e03d-47d0-9bdc-b75a20bdbd88", "[2018-06-25 05:56:50,160] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-25 05:56:50,160] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-bpzjhxhykqff-0-v56vmsgzsasw-NovaComputeUpgradeInitDeployment-ylk2bbygcbga/229eba28-ea1e-4396-970c-2835412d4667", "[2018-06-25 05:56:50,160] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-25 05:56:50,160] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-25 05:56:50,160] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0986b480-0bc9-4cd5-9205-a540894e1072", "[2018-06-25 05:56:50,165] (heat-config) [INFO] ", "[2018-06-25 05:56:50,165] (heat-config) [DEBUG] ", "[2018-06-25 05:56:50,165] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0986b480-0bc9-4cd5-9205-a540894e1072", "", "[2018-06-25 05:56:50,168] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-25 05:56:50,169] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0986b480-0bc9-4cd5-9205-a540894e1072.json < 
/var/lib/heat-config/deployed/0986b480-0bc9-4cd5-9205-a540894e1072.notify.json", "[2018-06-25 05:56:50,588] (heat-config) [INFO] ", "[2018-06-25 05:56:50,588] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:56:50,518 p=25239 u=mistral | TASK [Output for NovaComputeUpgradeInitDeployment] ***************************** >2018-06-25 05:56:50,567 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:56:50,136] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0986b480-0bc9-4cd5-9205-a540894e1072.json", > "[2018-06-25 05:56:50,168] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:56:50,168] (heat-config) [DEBUG] [2018-06-25 05:56:50,159] (heat-config) [INFO] deploy_server_id=8c72c6a7-e03d-47d0-9bdc-b75a20bdbd88", > "[2018-06-25 05:56:50,160] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-25 05:56:50,160] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-bpzjhxhykqff-0-v56vmsgzsasw-NovaComputeUpgradeInitDeployment-ylk2bbygcbga/229eba28-ea1e-4396-970c-2835412d4667", > "[2018-06-25 05:56:50,160] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-25 05:56:50,160] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-25 05:56:50,160] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0986b480-0bc9-4cd5-9205-a540894e1072", > "[2018-06-25 05:56:50,165] (heat-config) [INFO] ", > "[2018-06-25 05:56:50,165] (heat-config) [DEBUG] ", > "[2018-06-25 05:56:50,165] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0986b480-0bc9-4cd5-9205-a540894e1072", > "", > "[2018-06-25 05:56:50,168] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-25 05:56:50,169] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/0986b480-0bc9-4cd5-9205-a540894e1072.json < /var/lib/heat-config/deployed/0986b480-0bc9-4cd5-9205-a540894e1072.notify.json", > "[2018-06-25 05:56:50,588] (heat-config) [INFO] ", > "[2018-06-25 05:56:50,588] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:56:50,588 p=25239 u=mistral | TASK [Check-mode for Run deployment NovaComputeUpgradeInitDeployment] ********** >2018-06-25 05:56:50,603 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:50,621 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:56:50,758 p=25239 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "a38fbe11-6281-405e-a643-617636ea83b8"}, "changed": false} >2018-06-25 05:56:50,778 p=25239 u=mistral | TASK [Render deployment file for NovaComputeDeployment] ************************ >2018-06-25 05:56:51,619 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "b39caafaf4ab7aae3032346c0631fe7f22da508c", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeDeployment-a38fbe11-6281-405e-a643-617636ea83b8", "gid": 0, "group": "root", "md5sum": "84098c4352b5bee4b3a4712cafa53e51", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 21868, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920610.99-10660890634772/source", "state": "file", "uid": 0} >2018-06-25 05:56:51,640 p=25239 u=mistral | TASK [Check if deployed file exists for NovaComputeDeployment] ***************** >2018-06-25 05:56:52,006 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:56:52,029 p=25239 u=mistral | TASK [Check previous deployment rc for NovaComputeDeployment] ****************** >2018-06-25 05:56:52,047 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-25 05:56:52,067 p=25239 u=mistral | TASK [Remove deployed file for NovaComputeDeployment when previous deployment failed] *** >2018-06-25 05:56:52,085 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:52,107 p=25239 u=mistral | TASK [Force remove deployed file for NovaComputeDeployment] ******************** >2018-06-25 05:56:52,126 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:52,147 p=25239 u=mistral | TASK [Run deployment NovaComputeDeployment] ************************************ >2018-06-25 05:56:53,119 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/a38fbe11-6281-405e-a643-617636ea83b8.notify.json)", "delta": "0:00:00.549540", "end": "2018-06-25 05:56:53.219098", "rc": 0, "start": "2018-06-25 05:56:52.669558", "stderr": "[2018-06-25 05:56:52,698] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/a38fbe11-6281-405e-a643-617636ea83b8.json\n[2018-06-25 05:56:52,823] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:56:52,823] (heat-config) [DEBUG] \n[2018-06-25 05:56:52,823] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-25 05:56:52,823] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a38fbe11-6281-405e-a643-617636ea83b8.json < /var/lib/heat-config/deployed/a38fbe11-6281-405e-a643-617636ea83b8.notify.json\n[2018-06-25 05:56:53,212] (heat-config) [INFO] \n[2018-06-25 05:56:53,213] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:56:52,698] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < 
/var/lib/heat-config/deployed/a38fbe11-6281-405e-a643-617636ea83b8.json", "[2018-06-25 05:56:52,823] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:56:52,823] (heat-config) [DEBUG] ", "[2018-06-25 05:56:52,823] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-25 05:56:52,823] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a38fbe11-6281-405e-a643-617636ea83b8.json < /var/lib/heat-config/deployed/a38fbe11-6281-405e-a643-617636ea83b8.notify.json", "[2018-06-25 05:56:53,212] (heat-config) [INFO] ", "[2018-06-25 05:56:53,213] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:56:53,139 p=25239 u=mistral | TASK [Output for NovaComputeDeployment] **************************************** >2018-06-25 05:56:53,188 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:56:52,698] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/a38fbe11-6281-405e-a643-617636ea83b8.json", > "[2018-06-25 05:56:52,823] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:56:52,823] (heat-config) [DEBUG] ", > "[2018-06-25 05:56:52,823] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-25 05:56:52,823] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a38fbe11-6281-405e-a643-617636ea83b8.json < /var/lib/heat-config/deployed/a38fbe11-6281-405e-a643-617636ea83b8.notify.json", > "[2018-06-25 05:56:53,212] (heat-config) [INFO] ", > "[2018-06-25 05:56:53,213] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:56:53,209 p=25239 u=mistral | TASK [Check-mode for Run deployment NovaComputeDeployment] ********************* >2018-06-25 05:56:53,223 p=25239 u=mistral | skipping: 
[compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:53,243 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:56:53,296 p=25239 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "9797c89c-e652-450e-b000-1cdf8e3fa67a"}, "changed": false} >2018-06-25 05:56:53,315 p=25239 u=mistral | TASK [Render deployment file for ComputeHostsDeployment] *********************** >2018-06-25 05:56:54,009 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "27439f9998b3472b4be022b76fc51c2317734cf7", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeHostsDeployment-9797c89c-e652-450e-b000-1cdf8e3fa67a", "gid": 0, "group": "root", "md5sum": "fd4546ce685953f4cd384a2605a4b5af", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4081, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920613.42-170102705934672/source", "state": "file", "uid": 0} >2018-06-25 05:56:54,028 p=25239 u=mistral | TASK [Check if deployed file exists for ComputeHostsDeployment] **************** >2018-06-25 05:56:54,423 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:56:54,444 p=25239 u=mistral | TASK [Check previous deployment rc for ComputeHostsDeployment] ***************** >2018-06-25 05:56:54,464 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:54,484 p=25239 u=mistral | TASK [Remove deployed file for ComputeHostsDeployment when previous deployment failed] *** >2018-06-25 05:56:54,504 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:54,524 p=25239 u=mistral | TASK [Force remove deployed file for ComputeHostsDeployment] ******************* >2018-06-25 05:56:54,544 p=25239 u=mistral | skipping: 
[compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:54,564 p=25239 u=mistral | TASK [Run deployment ComputeHostsDeployment] *********************************** >2018-06-25 05:56:55,463 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/9797c89c-e652-450e-b000-1cdf8e3fa67a.notify.json)", "delta": "0:00:00.479919", "end": "2018-06-25 05:56:55.535217", "rc": 0, "start": "2018-06-25 05:56:55.055298", "stderr": "[2018-06-25 05:56:55,080] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/9797c89c-e652-450e-b000-1cdf8e3fa67a.json\n[2018-06-25 05:56:55,118] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 
compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain 
compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 
overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}\n[2018-06-25 05:56:55,118] (heat-config) [DEBUG] [2018-06-25 05:56:55,101] (heat-config) [INFO] hosts=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-06-25 05:56:55,102] (heat-config) [INFO] deploy_server_id=8c72c6a7-e03d-47d0-9bdc-b75a20bdbd88\n[2018-06-25 05:56:55,102] (heat-config) [INFO] 
deploy_action=CREATE\n[2018-06-25 05:56:55,102] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-wj5zambhcrlz-0-fxnoej7rwwsr/0eb56554-edbb-4a82-b7cc-710e57424a0a\n[2018-06-25 05:56:55,102] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-25 05:56:55,102] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-25 05:56:55,102] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/9797c89c-e652-450e-b000-1cdf8e3fa67a\n[2018-06-25 05:56:55,115] (heat-config) [INFO] \n[2018-06-25 05:56:55,115] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain 
ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 
'entries=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 
overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 
overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries 
/etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 
overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-06-25 05:56:55,115] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/9797c89c-e652-450e-b000-1cdf8e3fa67a\n\n[2018-06-25 05:56:55,118] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-25 05:56:55,119] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/9797c89c-e652-450e-b000-1cdf8e3fa67a.json < /var/lib/heat-config/deployed/9797c89c-e652-450e-b000-1cdf8e3fa67a.notify.json\n[2018-06-25 05:56:55,528] (heat-config) [INFO] \n[2018-06-25 05:56:55,528] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:56:55,080] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/9797c89c-e652-450e-b000-1cdf8e3fa67a.json", "[2018-06-25 05:56:55,118] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 
compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 
overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", "[2018-06-25 05:56:55,118] (heat-config) [DEBUG] [2018-06-25 05:56:55,101] (heat-config) [INFO] hosts=192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-06-25 05:56:55,102] (heat-config) [INFO] 
deploy_server_id=8c72c6a7-e03d-47d0-9bdc-b75a20bdbd88", "[2018-06-25 05:56:55,102] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-25 05:56:55,102] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-wj5zambhcrlz-0-fxnoej7rwwsr/0eb56554-edbb-4a82-b7cc-710e57424a0a", "[2018-06-25 05:56:55,102] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-25 05:56:55,102] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-25 05:56:55,102] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/9797c89c-e652-450e-b000-1cdf8e3fa67a", "[2018-06-25 05:56:55,115] (heat-config) [INFO] ", "[2018-06-25 05:56:55,115] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", 
"192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain 
ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 
ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", 
"+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", 
"", "[2018-06-25 05:56:55,115] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/9797c89c-e652-450e-b000-1cdf8e3fa67a", "", "[2018-06-25 05:56:55,118] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-25 05:56:55,119] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/9797c89c-e652-450e-b000-1cdf8e3fa67a.json < /var/lib/heat-config/deployed/9797c89c-e652-450e-b000-1cdf8e3fa67a.notify.json", "[2018-06-25 05:56:55,528] (heat-config) [INFO] ", "[2018-06-25 05:56:55,528] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:56:55,492 p=25239 u=mistral | TASK [Output for ComputeHostsDeployment] *************************************** >2018-06-25 05:56:55,606 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:56:55,080] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/9797c89c-e652-450e-b000-1cdf8e3fa67a.json", > "[2018-06-25 05:56:55,118] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 
overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain 
controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne 
'# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 
overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", > "[2018-06-25 05:56:55,118] (heat-config) [DEBUG] [2018-06-25 05:56:55,101] (heat-config) [INFO] hosts=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-06-25 
05:56:55,102] (heat-config) [INFO] deploy_server_id=8c72c6a7-e03d-47d0-9bdc-b75a20bdbd88", > "[2018-06-25 05:56:55,102] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-25 05:56:55,102] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-wj5zambhcrlz-0-fxnoej7rwwsr/0eb56554-edbb-4a82-b7cc-710e57424a0a", > "[2018-06-25 05:56:55,102] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-25 05:56:55,102] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-25 05:56:55,102] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/9797c89c-e652-450e-b000-1cdf8e3fa67a", > "[2018-06-25 05:56:55,115] (heat-config) [INFO] ", > "[2018-06-25 05:56:55,115] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain 
compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain 
compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 
compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 
compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > 
"192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 
compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > 
"192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 
compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 
compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 
compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-06-25 05:56:55,115] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/9797c89c-e652-450e-b000-1cdf8e3fa67a", > "", > "[2018-06-25 05:56:55,118] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-25 05:56:55,119] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/9797c89c-e652-450e-b000-1cdf8e3fa67a.json < /var/lib/heat-config/deployed/9797c89c-e652-450e-b000-1cdf8e3fa67a.notify.json", > "[2018-06-25 05:56:55,528] (heat-config) [INFO] ", > "[2018-06-25 05:56:55,528] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:56:55,634 p=25239 u=mistral | TASK [Check-mode for Run deployment ComputeHostsDeployment] ******************** >2018-06-25 05:56:55,649 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:55,668 p=25239 u=mistral | TASK [Lookup deployment UUID] 
************************************************** >2018-06-25 05:56:55,878 p=25239 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "1816ae23-7cfa-437b-8db9-9c9a5cb82d71"}, "changed": false} >2018-06-25 05:56:55,897 p=25239 u=mistral | TASK [Render deployment file for ComputeAllNodesDeployment] ******************** >2018-06-25 05:56:56,648 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "b1c5c2539be3227e5f6af4d98858d4c750e501b4", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeAllNodesDeployment-1816ae23-7cfa-437b-8db9-9c9a5cb82d71", "gid": 0, "group": "root", "md5sum": "4bbef49eebb90ff90b7ee3a8707a99e4", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19024, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920616.05-171084853562111/source", "state": "file", "uid": 0} >2018-06-25 05:56:56,666 p=25239 u=mistral | TASK [Check if deployed file exists for ComputeAllNodesDeployment] ************* >2018-06-25 05:56:57,012 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:56:57,035 p=25239 u=mistral | TASK [Check previous deployment rc for ComputeAllNodesDeployment] ************** >2018-06-25 05:56:57,052 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:57,072 p=25239 u=mistral | TASK [Remove deployed file for ComputeAllNodesDeployment when previous deployment failed] *** >2018-06-25 05:56:57,091 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:57,113 p=25239 u=mistral | TASK [Force remove deployed file for ComputeAllNodesDeployment] **************** >2018-06-25 05:56:57,132 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:57,150 p=25239 u=mistral | TASK [Run deployment 
ComputeAllNodesDeployment] ******************************** >2018-06-25 05:56:58,109 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/1816ae23-7cfa-437b-8db9-9c9a5cb82d71.notify.json)", "delta": "0:00:00.590992", "end": "2018-06-25 05:56:58.204555", "rc": 0, "start": "2018-06-25 05:56:57.613563", "stderr": "[2018-06-25 05:56:57,640] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/1816ae23-7cfa-437b-8db9-9c9a5cb82d71.json\n[2018-06-25 05:56:57,752] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:56:57,753] (heat-config) [DEBUG] \n[2018-06-25 05:56:57,753] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-25 05:56:57,753] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1816ae23-7cfa-437b-8db9-9c9a5cb82d71.json < /var/lib/heat-config/deployed/1816ae23-7cfa-437b-8db9-9c9a5cb82d71.notify.json\n[2018-06-25 05:56:58,198] (heat-config) [INFO] \n[2018-06-25 05:56:58,198] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:56:57,640] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/1816ae23-7cfa-437b-8db9-9c9a5cb82d71.json", "[2018-06-25 05:56:57,752] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:56:57,753] (heat-config) [DEBUG] ", "[2018-06-25 05:56:57,753] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-25 05:56:57,753] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1816ae23-7cfa-437b-8db9-9c9a5cb82d71.json < /var/lib/heat-config/deployed/1816ae23-7cfa-437b-8db9-9c9a5cb82d71.notify.json", "[2018-06-25 05:56:58,198] (heat-config) [INFO] ", "[2018-06-25 05:56:58,198] 
(heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:56:58,130 p=25239 u=mistral | TASK [Output for ComputeAllNodesDeployment] ************************************ >2018-06-25 05:56:58,179 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:56:57,640] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/1816ae23-7cfa-437b-8db9-9c9a5cb82d71.json", > "[2018-06-25 05:56:57,752] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:56:57,753] (heat-config) [DEBUG] ", > "[2018-06-25 05:56:57,753] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-25 05:56:57,753] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1816ae23-7cfa-437b-8db9-9c9a5cb82d71.json < /var/lib/heat-config/deployed/1816ae23-7cfa-437b-8db9-9c9a5cb82d71.notify.json", > "[2018-06-25 05:56:58,198] (heat-config) [INFO] ", > "[2018-06-25 05:56:58,198] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:56:58,199 p=25239 u=mistral | TASK [Check-mode for Run deployment ComputeAllNodesDeployment] ***************** >2018-06-25 05:56:58,213 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:58,231 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:56:58,289 p=25239 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "8d3a373a-9cce-4df4-aaa3-eee92483547c"}, "changed": false} >2018-06-25 05:56:58,311 p=25239 u=mistral | TASK [Render deployment file for ComputeAllNodesValidationDeployment] ********** >2018-06-25 05:56:58,987 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "f5ab5b6e90abcc8dcea4d02b17938639641ec804", "dest": 
"/var/lib/heat-config/tripleo-config-download/ComputeAllNodesValidationDeployment-8d3a373a-9cce-4df4-aaa3-eee92483547c", "gid": 0, "group": "root", "md5sum": "5bc31712f1d1704731990fa0ec43d3c8", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4935, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920618.37-50478491193526/source", "state": "file", "uid": 0} >2018-06-25 05:56:59,007 p=25239 u=mistral | TASK [Check if deployed file exists for ComputeAllNodesValidationDeployment] *** >2018-06-25 05:56:59,377 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:56:59,400 p=25239 u=mistral | TASK [Check previous deployment rc for ComputeAllNodesValidationDeployment] **** >2018-06-25 05:56:59,418 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:59,438 p=25239 u=mistral | TASK [Remove deployed file for ComputeAllNodesValidationDeployment when previous deployment failed] *** >2018-06-25 05:56:59,456 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:59,477 p=25239 u=mistral | TASK [Force remove deployed file for ComputeAllNodesValidationDeployment] ****** >2018-06-25 05:56:59,495 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:56:59,513 p=25239 u=mistral | TASK [Run deployment ComputeAllNodesValidationDeployment] ********************** >2018-06-25 05:57:00,921 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/8d3a373a-9cce-4df4-aaa3-eee92483547c.notify.json)", "delta": "0:00:01.032143", "end": "2018-06-25 05:57:01.014658", "rc": 0, "start": "2018-06-25 05:56:59.982515", "stderr": "[2018-06-25 05:57:00,009] 
(heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/8d3a373a-9cce-4df4-aaa3-eee92483547c.json\n[2018-06-25 05:57:00,608] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.12 for local network 172.17.1.0/24.\\nPing to 172.17.1.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.16 for local network 172.17.2.0/24.\\nPing to 172.17.2.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.10 for local network 172.17.3.0/24.\\nPing to 172.17.3.10 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.14 for local network 192.168.24.0/24.\\nPing to 192.168.24.14 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:57:00,608] (heat-config) [DEBUG] [2018-06-25 05:57:00,032] (heat-config) [INFO] ping_test_ips=172.17.3.10 172.17.4.15 172.17.1.12 172.17.2.16 10.0.0.106 192.168.24.14\n[2018-06-25 05:57:00,032] (heat-config) [INFO] validate_fqdn=False\n[2018-06-25 05:57:00,032] (heat-config) [INFO] validate_ntp=True\n[2018-06-25 05:57:00,032] (heat-config) [INFO] deploy_server_id=8c72c6a7-e03d-47d0-9bdc-b75a20bdbd88\n[2018-06-25 05:57:00,032] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-25 05:57:00,032] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-v3mtd42a2rhq-0-nowmybte64fr/01095d5f-33a9-4d8d-85cc-0ab681849953\n[2018-06-25 05:57:00,032] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-25 05:57:00,032] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-25 05:57:00,033] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/8d3a373a-9cce-4df4-aaa3-eee92483547c\n[2018-06-25 05:57:00,604] (heat-config) [INFO] Trying to ping 172.17.1.12 for local network 172.17.1.0/24.\nPing to 172.17.1.12 succeeded.\nSUCCESS\nTrying to ping 172.17.2.16 for local network 172.17.2.0/24.\nPing to 172.17.2.16 
succeeded.\nSUCCESS\nTrying to ping 172.17.3.10 for local network 172.17.3.0/24.\nPing to 172.17.3.10 succeeded.\nSUCCESS\nTrying to ping 192.168.24.14 for local network 192.168.24.0/24.\nPing to 192.168.24.14 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nSUCCESS\n\n[2018-06-25 05:57:00,604] (heat-config) [DEBUG] \n[2018-06-25 05:57:00,604] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/8d3a373a-9cce-4df4-aaa3-eee92483547c\n\n[2018-06-25 05:57:00,608] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-25 05:57:00,609] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8d3a373a-9cce-4df4-aaa3-eee92483547c.json < /var/lib/heat-config/deployed/8d3a373a-9cce-4df4-aaa3-eee92483547c.notify.json\n[2018-06-25 05:57:01,008] (heat-config) [INFO] \n[2018-06-25 05:57:01,008] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:57:00,009] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/8d3a373a-9cce-4df4-aaa3-eee92483547c.json", "[2018-06-25 05:57:00,608] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.12 for local network 172.17.1.0/24.\\nPing to 172.17.1.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.16 for local network 172.17.2.0/24.\\nPing to 172.17.2.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.10 for local network 172.17.3.0/24.\\nPing to 172.17.3.10 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.14 for local network 192.168.24.0/24.\\nPing to 192.168.24.14 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:57:00,608] (heat-config) [DEBUG] [2018-06-25 05:57:00,032] (heat-config) [INFO] ping_test_ips=172.17.3.10 172.17.4.15 172.17.1.12 172.17.2.16 10.0.0.106 192.168.24.14", "[2018-06-25 05:57:00,032] (heat-config) [INFO] 
validate_fqdn=False", "[2018-06-25 05:57:00,032] (heat-config) [INFO] validate_ntp=True", "[2018-06-25 05:57:00,032] (heat-config) [INFO] deploy_server_id=8c72c6a7-e03d-47d0-9bdc-b75a20bdbd88", "[2018-06-25 05:57:00,032] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-25 05:57:00,032] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-v3mtd42a2rhq-0-nowmybte64fr/01095d5f-33a9-4d8d-85cc-0ab681849953", "[2018-06-25 05:57:00,032] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-25 05:57:00,032] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-25 05:57:00,033] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/8d3a373a-9cce-4df4-aaa3-eee92483547c", "[2018-06-25 05:57:00,604] (heat-config) [INFO] Trying to ping 172.17.1.12 for local network 172.17.1.0/24.", "Ping to 172.17.1.12 succeeded.", "SUCCESS", "Trying to ping 172.17.2.16 for local network 172.17.2.0/24.", "Ping to 172.17.2.16 succeeded.", "SUCCESS", "Trying to ping 172.17.3.10 for local network 172.17.3.0/24.", "Ping to 172.17.3.10 succeeded.", "SUCCESS", "Trying to ping 192.168.24.14 for local network 192.168.24.0/24.", "Ping to 192.168.24.14 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "SUCCESS", "", "[2018-06-25 05:57:00,604] (heat-config) [DEBUG] ", "[2018-06-25 05:57:00,604] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/8d3a373a-9cce-4df4-aaa3-eee92483547c", "", "[2018-06-25 05:57:00,608] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-25 05:57:00,609] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8d3a373a-9cce-4df4-aaa3-eee92483547c.json < /var/lib/heat-config/deployed/8d3a373a-9cce-4df4-aaa3-eee92483547c.notify.json", "[2018-06-25 05:57:01,008] (heat-config) [INFO] ", "[2018-06-25 05:57:01,008] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} 
>2018-06-25 05:57:00,942 p=25239 u=mistral | TASK [Output for ComputeAllNodesValidationDeployment] ************************** >2018-06-25 05:57:00,991 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:57:00,009] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/8d3a373a-9cce-4df4-aaa3-eee92483547c.json", > "[2018-06-25 05:57:00,608] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.12 for local network 172.17.1.0/24.\\nPing to 172.17.1.12 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.16 for local network 172.17.2.0/24.\\nPing to 172.17.2.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.10 for local network 172.17.3.0/24.\\nPing to 172.17.3.10 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.14 for local network 192.168.24.0/24.\\nPing to 192.168.24.14 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:57:00,608] (heat-config) [DEBUG] [2018-06-25 05:57:00,032] (heat-config) [INFO] ping_test_ips=172.17.3.10 172.17.4.15 172.17.1.12 172.17.2.16 10.0.0.106 192.168.24.14", > "[2018-06-25 05:57:00,032] (heat-config) [INFO] validate_fqdn=False", > "[2018-06-25 05:57:00,032] (heat-config) [INFO] validate_ntp=True", > "[2018-06-25 05:57:00,032] (heat-config) [INFO] deploy_server_id=8c72c6a7-e03d-47d0-9bdc-b75a20bdbd88", > "[2018-06-25 05:57:00,032] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-25 05:57:00,032] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-v3mtd42a2rhq-0-nowmybte64fr/01095d5f-33a9-4d8d-85cc-0ab681849953", > "[2018-06-25 05:57:00,032] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-25 05:57:00,032] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-25 05:57:00,033] (heat-config) [DEBUG] 
Running /var/lib/heat-config/heat-config-script/8d3a373a-9cce-4df4-aaa3-eee92483547c", > "[2018-06-25 05:57:00,604] (heat-config) [INFO] Trying to ping 172.17.1.12 for local network 172.17.1.0/24.", > "Ping to 172.17.1.12 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.16 for local network 172.17.2.0/24.", > "Ping to 172.17.2.16 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.10 for local network 172.17.3.0/24.", > "Ping to 172.17.3.10 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.14 for local network 192.168.24.0/24.", > "Ping to 192.168.24.14 succeeded.", > "SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "SUCCESS", > "", > "[2018-06-25 05:57:00,604] (heat-config) [DEBUG] ", > "[2018-06-25 05:57:00,604] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/8d3a373a-9cce-4df4-aaa3-eee92483547c", > "", > "[2018-06-25 05:57:00,608] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-25 05:57:00,609] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8d3a373a-9cce-4df4-aaa3-eee92483547c.json < /var/lib/heat-config/deployed/8d3a373a-9cce-4df4-aaa3-eee92483547c.notify.json", > "[2018-06-25 05:57:01,008] (heat-config) [INFO] ", > "[2018-06-25 05:57:01,008] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:57:01,012 p=25239 u=mistral | TASK [Check-mode for Run deployment ComputeAllNodesValidationDeployment] ******* >2018-06-25 05:57:01,028 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:01,047 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:57:01,133 p=25239 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "8a198975-d6d1-45b6-b139-a7a4a9172d60"}, "changed": false} >2018-06-25 05:57:01,152 p=25239 u=mistral | TASK [Render deployment 
file for ComputeHostPrepDeployment] ******************** >2018-06-25 05:57:01,827 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "a6cdbb67d7f5fb85d496f724a2ee6a2973b96712", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeHostPrepDeployment-8a198975-d6d1-45b6-b139-a7a4a9172d60", "gid": 0, "group": "root", "md5sum": "0d71e27106d4ff901b5a376b231baaf6", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 33672, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920621.24-274020678140366/source", "state": "file", "uid": 0} >2018-06-25 05:57:01,846 p=25239 u=mistral | TASK [Check if deployed file exists for ComputeHostPrepDeployment] ************* >2018-06-25 05:57:02,197 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:57:02,216 p=25239 u=mistral | TASK [Check previous deployment rc for ComputeHostPrepDeployment] ************** >2018-06-25 05:57:02,234 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:02,253 p=25239 u=mistral | TASK [Remove deployed file for ComputeHostPrepDeployment when previous deployment failed] *** >2018-06-25 05:57:02,270 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:02,289 p=25239 u=mistral | TASK [Force remove deployed file for ComputeHostPrepDeployment] **************** >2018-06-25 05:57:02,306 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:02,326 p=25239 u=mistral | TASK [Run deployment ComputeHostPrepDeployment] ******************************** >2018-06-25 05:57:13,426 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code 
/var/lib/heat-config/deployed/8a198975-d6d1-45b6-b139-a7a4a9172d60.notify.json)", "delta": "0:00:10.729526", "end": "2018-06-25 05:57:13.516181", "rc": 0, "start": "2018-06-25 05:57:02.786655", "stderr": "[2018-06-25 05:57:02,814] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/8a198975-d6d1-45b6-b139-a7a4a9172d60.json\n[2018-06-25 05:57:13,081] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [Mount Nova NFS Share] ****************************************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/nova)\\nok: [localhost] => (item=/var/lib/libvirt)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [is Instance HA enabled] **************************************************\\nok: [localhost]\\n\\nTASK [prepare Instance HA script directory] ************************************\\nskipping: [localhost]\\n\\nTASK [install Instance HA script that runs nova-compute] ***********************\\nskipping: [localhost]\\n\\nTASK [Get list of instance HA compute nodes] ***********************************\\nskipping: [localhost]\\n\\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\\nskipping: [localhost]\\n\\nTASK [create libvirt persistent data directories] ******************************\\nok: [localhost] => (item=/etc/libvirt)\\nok: [localhost] => (item=/etc/libvirt/secrets)\\nok: [localhost] => (item=/etc/libvirt/qemu)\\nok: [localhost] => (item=/var/lib/libvirt)\\nchanged: [localhost] => (item=/var/log/containers/libvirt)\\n\\nTASK [ensure qemu group is present on the host] ********************************\\nok: [localhost]\\n\\nTASK [ensure qemu user is present on the host] *********************************\\nok: [localhost]\\n\\nTASK [create directory for vhost-user sockets with qemu ownership] *************\\nchanged: [localhost]\\n\\nTASK [check if libvirt is installed] *******************************************\\nchanged: [localhost]\\n\\nTASK [make sure libvirt services are disabled] *********************************\\nchanged: [localhost] => 
(item=libvirtd.service)\\nchanged: [localhost] => (item=virtlogd.socket)\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=20 changed=12 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \" [WARNING]: Consider using the yum, dnf or zypper module rather than running\\nrpm. If you need to use command because yum, dnf or zypper is insufficient you\\ncan add warn=False to this command task or set command_warnings=False in\\nansible.cfg to get rid of this message.\\n\", \"deploy_status_code\": 0}\n[2018-06-25 05:57:13,082] (heat-config) [DEBUG] [2018-06-25 05:57:02,835] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/8a198975-d6d1-45b6-b139-a7a4a9172d60_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/8a198975-d6d1-45b6-b139-a7a4a9172d60_variables.json\n[2018-06-25 05:57:13,077] (heat-config) [INFO] Return code 0\n[2018-06-25 05:57:13,077] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [ceilometer logs readme] **************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/neutron)\n\nTASK [neutron logs readme] *****************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}\n...ignoring\n\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\nok: [localhost]\n\nTASK [Stop and disable iscsid.socket service] **********************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [nova logs readme] ********************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}\n...ignoring\n\nTASK [Mount Nova NFS Share] ****************************************************\nskipping: [localhost]\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/lib/nova)\nok: [localhost] => (item=/var/lib/libvirt)\n\nTASK [ensure ceph configurations exist] ****************************************\nchanged: [localhost]\n\nTASK [is Instance HA enabled] **************************************************\nok: [localhost]\n\nTASK [prepare Instance HA script directory] ************************************\nskipping: [localhost]\n\nTASK [install Instance HA script that runs nova-compute] ***********************\nskipping: [localhost]\n\nTASK [Get list of instance HA compute nodes] ***********************************\nskipping: [localhost]\n\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\nskipping: [localhost]\n\nTASK [create libvirt persistent data directories] ******************************\nok: [localhost] => (item=/etc/libvirt)\nok: [localhost] => (item=/etc/libvirt/secrets)\nok: [localhost] => (item=/etc/libvirt/qemu)\nok: [localhost] => (item=/var/lib/libvirt)\nchanged: [localhost] => (item=/var/log/containers/libvirt)\n\nTASK [ensure qemu group is present on the host] ********************************\nok: [localhost]\n\nTASK [ensure qemu user is present on the host] *********************************\nok: [localhost]\n\nTASK [create directory for vhost-user sockets with qemu ownership] *************\nchanged: [localhost]\n\nTASK [check if libvirt is installed] *******************************************\nchanged: [localhost]\n\nTASK [make sure libvirt services are disabled] *********************************\nchanged: [localhost] => (item=libvirtd.service)\nchanged: [localhost] => 
(item=virtlogd.socket)\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=20 changed=12 unreachable=0 failed=0 \n\n\n[2018-06-25 05:57:13,077] (heat-config) [INFO] [WARNING]: Consider using the yum, dnf or zypper module rather than running\nrpm. If you need to use command because yum, dnf or zypper is insufficient you\ncan add warn=False to this command task or set command_warnings=False in\nansible.cfg to get rid of this message.\n\n[2018-06-25 05:57:13,077] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/8a198975-d6d1-45b6-b139-a7a4a9172d60_playbook.yaml\n\n[2018-06-25 05:57:13,082] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-06-25 05:57:13,082] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8a198975-d6d1-45b6-b139-a7a4a9172d60.json < /var/lib/heat-config/deployed/8a198975-d6d1-45b6-b139-a7a4a9172d60.notify.json\n[2018-06-25 05:57:13,509] (heat-config) [INFO] \n[2018-06-25 05:57:13,509] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:57:02,814] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/8a198975-d6d1-45b6-b139-a7a4a9172d60.json", "[2018-06-25 05:57:13,081] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [Mount Nova NFS Share] ****************************************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/nova)\\nok: [localhost] => (item=/var/lib/libvirt)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [is Instance HA enabled] **************************************************\\nok: [localhost]\\n\\nTASK [prepare Instance HA script directory] ************************************\\nskipping: [localhost]\\n\\nTASK [install Instance HA script that runs nova-compute] ***********************\\nskipping: [localhost]\\n\\nTASK [Get list of instance HA compute nodes] ***********************************\\nskipping: [localhost]\\n\\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\\nskipping: [localhost]\\n\\nTASK [create libvirt persistent data directories] ******************************\\nok: [localhost] => (item=/etc/libvirt)\\nok: [localhost] => (item=/etc/libvirt/secrets)\\nok: [localhost] => (item=/etc/libvirt/qemu)\\nok: [localhost] => (item=/var/lib/libvirt)\\nchanged: [localhost] => (item=/var/log/containers/libvirt)\\n\\nTASK [ensure qemu group is present on the host] ********************************\\nok: [localhost]\\n\\nTASK [ensure qemu user is present on the host] *********************************\\nok: [localhost]\\n\\nTASK [create directory for vhost-user sockets with qemu ownership] *************\\nchanged: [localhost]\\n\\nTASK [check if libvirt is installed] *******************************************\\nchanged: [localhost]\\n\\nTASK [make sure libvirt services are disabled] *********************************\\nchanged: [localhost] => 
(item=libvirtd.service)\\nchanged: [localhost] => (item=virtlogd.socket)\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=20 changed=12 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \" [WARNING]: Consider using the yum, dnf or zypper module rather than running\\nrpm. If you need to use command because yum, dnf or zypper is insufficient you\\ncan add warn=False to this command task or set command_warnings=False in\\nansible.cfg to get rid of this message.\\n\", \"deploy_status_code\": 0}", "[2018-06-25 05:57:13,082] (heat-config) [DEBUG] [2018-06-25 05:57:02,835] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/8a198975-d6d1-45b6-b139-a7a4a9172d60_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/8a198975-d6d1-45b6-b139-a7a4a9172d60_variables.json", "[2018-06-25 05:57:13,077] (heat-config) [INFO] Return code 0", "[2018-06-25 05:57:13,077] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [ceilometer logs readme] **************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/neutron)", "", "TASK [neutron logs readme] *****************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", "...ignoring", "", "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", "ok: [localhost]", "", "TASK [Stop and disable iscsid.socket service] **********************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [nova logs readme] ********************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", "...ignoring", "", "TASK [Mount Nova NFS Share] ****************************************************", "skipping: [localhost]", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/lib/nova)", "ok: [localhost] => (item=/var/lib/libvirt)", "", "TASK [ensure ceph configurations exist] ****************************************", "changed: [localhost]", "", "TASK [is Instance HA enabled] **************************************************", "ok: [localhost]", "", "TASK [prepare Instance HA script directory] ************************************", "skipping: [localhost]", "", "TASK [install Instance HA script that runs nova-compute] ***********************", "skipping: [localhost]", "", "TASK [Get list of instance HA compute nodes] ***********************************", "skipping: [localhost]", "", "TASK [If instance HA is enabled on the node activate the evacuation completed check] ***", "skipping: [localhost]", "", "TASK [create libvirt persistent data directories] ******************************", "ok: [localhost] => (item=/etc/libvirt)", "ok: [localhost] => (item=/etc/libvirt/secrets)", "ok: [localhost] => (item=/etc/libvirt/qemu)", "ok: [localhost] => (item=/var/lib/libvirt)", "changed: [localhost] => (item=/var/log/containers/libvirt)", "", "TASK [ensure qemu group is present on the host] ********************************", "ok: [localhost]", "", "TASK [ensure qemu user is present on the host] *********************************", "ok: [localhost]", "", "TASK [create directory for vhost-user sockets with qemu ownership] *************", "changed: [localhost]", "", "TASK [check if libvirt is installed] *******************************************", "changed: [localhost]", "", "TASK [make sure libvirt services are disabled] 
*********************************", "changed: [localhost] => (item=libvirtd.service)", "changed: [localhost] => (item=virtlogd.socket)", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=20 changed=12 unreachable=0 failed=0 ", "", "", "[2018-06-25 05:57:13,077] (heat-config) [INFO] [WARNING]: Consider using the yum, dnf or zypper module rather than running", "rpm. If you need to use command because yum, dnf or zypper is insufficient you", "can add warn=False to this command task or set command_warnings=False in", "ansible.cfg to get rid of this message.", "", "[2018-06-25 05:57:13,077] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/8a198975-d6d1-45b6-b139-a7a4a9172d60_playbook.yaml", "", "[2018-06-25 05:57:13,082] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-06-25 05:57:13,082] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8a198975-d6d1-45b6-b139-a7a4a9172d60.json < /var/lib/heat-config/deployed/8a198975-d6d1-45b6-b139-a7a4a9172d60.notify.json", "[2018-06-25 05:57:13,509] (heat-config) [INFO] ", "[2018-06-25 05:57:13,509] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:57:13,448 p=25239 u=mistral | TASK [Output for ComputeHostPrepDeployment] ************************************ >2018-06-25 05:57:13,498 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:57:02,814] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/8a198975-d6d1-45b6-b139-a7a4a9172d60.json", > "[2018-06-25 05:57:13,081] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] 
***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [Mount Nova NFS Share] ****************************************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/nova)\\nok: [localhost] => (item=/var/lib/libvirt)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [is Instance HA enabled] **************************************************\\nok: [localhost]\\n\\nTASK [prepare Instance HA script directory] ************************************\\nskipping: [localhost]\\n\\nTASK [install Instance HA script that runs nova-compute] ***********************\\nskipping: [localhost]\\n\\nTASK [Get list of instance HA compute nodes] ***********************************\\nskipping: [localhost]\\n\\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\\nskipping: [localhost]\\n\\nTASK [create libvirt persistent data directories] ******************************\\nok: [localhost] => (item=/etc/libvirt)\\nok: [localhost] => (item=/etc/libvirt/secrets)\\nok: [localhost] => (item=/etc/libvirt/qemu)\\nok: [localhost] => (item=/var/lib/libvirt)\\nchanged: [localhost] => (item=/var/log/containers/libvirt)\\n\\nTASK [ensure qemu group is present on the host] ********************************\\nok: [localhost]\\n\\nTASK [ensure qemu user is present on the host] *********************************\\nok: [localhost]\\n\\nTASK [create directory for vhost-user sockets with qemu ownership] *************\\nchanged: [localhost]\\n\\nTASK [check if libvirt is installed] *******************************************\\nchanged: [localhost]\\n\\nTASK [make sure libvirt services are disabled] *********************************\\nchanged: [localhost] => 
(item=libvirtd.service)\\nchanged: [localhost] => (item=virtlogd.socket)\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=20 changed=12 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \" [WARNING]: Consider using the yum, dnf or zypper module rather than running\\nrpm. If you need to use command because yum, dnf or zypper is insufficient you\\ncan add warn=False to this command task or set command_warnings=False in\\nansible.cfg to get rid of this message.\\n\", \"deploy_status_code\": 0}", > "[2018-06-25 05:57:13,082] (heat-config) [DEBUG] [2018-06-25 05:57:02,835] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/8a198975-d6d1-45b6-b139-a7a4a9172d60_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/8a198975-d6d1-45b6-b139-a7a4a9172d60_variables.json", > "[2018-06-25 05:57:13,077] (heat-config) [INFO] Return code 0", > "[2018-06-25 05:57:13,077] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [ceilometer logs readme] **************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/neutron)", > "", > "TASK [neutron logs readme] *****************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", > "...ignoring", > "", > "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", > "ok: [localhost]", > "", > "TASK [Stop and disable iscsid.socket service] **********************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [nova logs readme] ********************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", > "...ignoring", > "", > "TASK [Mount Nova NFS Share] ****************************************************", > "skipping: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/lib/nova)", > "ok: [localhost] => (item=/var/lib/libvirt)", > "", > "TASK [ensure ceph configurations exist] ****************************************", > "changed: [localhost]", > "", > "TASK [is Instance HA enabled] **************************************************", > "ok: [localhost]", > "", > "TASK [prepare Instance HA script directory] ************************************", > "skipping: [localhost]", > "", > "TASK [install Instance HA script that runs nova-compute] ***********************", > "skipping: [localhost]", > "", > "TASK [Get list of instance HA compute nodes] ***********************************", > "skipping: [localhost]", > "", > "TASK [If instance HA is enabled on the node activate the evacuation completed check] ***", > "skipping: [localhost]", > "", > "TASK [create libvirt persistent data directories] ******************************", > "ok: [localhost] => (item=/etc/libvirt)", > "ok: [localhost] => (item=/etc/libvirt/secrets)", > "ok: [localhost] => (item=/etc/libvirt/qemu)", > "ok: [localhost] => (item=/var/lib/libvirt)", > "changed: [localhost] => (item=/var/log/containers/libvirt)", > "", > "TASK [ensure qemu group is present on the host] ********************************", > "ok: [localhost]", > "", > "TASK [ensure qemu user is present on the host] *********************************", > "ok: [localhost]", > "", > "TASK [create directory for vhost-user sockets with qemu ownership] *************", > "changed: [localhost]", > "", > "TASK [check if libvirt is installed] *******************************************", > "changed: 
[localhost]", > "", > "TASK [make sure libvirt services are disabled] *********************************", > "changed: [localhost] => (item=libvirtd.service)", > "changed: [localhost] => (item=virtlogd.socket)", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=20 changed=12 unreachable=0 failed=0 ", > "", > "", > "[2018-06-25 05:57:13,077] (heat-config) [INFO] [WARNING]: Consider using the yum, dnf or zypper module rather than running", > "rpm. If you need to use command because yum, dnf or zypper is insufficient you", > "can add warn=False to this command task or set command_warnings=False in", > "ansible.cfg to get rid of this message.", > "", > "[2018-06-25 05:57:13,077] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/8a198975-d6d1-45b6-b139-a7a4a9172d60_playbook.yaml", > "", > "[2018-06-25 05:57:13,082] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-06-25 05:57:13,082] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8a198975-d6d1-45b6-b139-a7a4a9172d60.json < /var/lib/heat-config/deployed/8a198975-d6d1-45b6-b139-a7a4a9172d60.notify.json", > "[2018-06-25 05:57:13,509] (heat-config) [INFO] ", > "[2018-06-25 05:57:13,509] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:57:13,520 p=25239 u=mistral | TASK [Check-mode for Run deployment ComputeHostPrepDeployment] ***************** >2018-06-25 05:57:13,535 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:13,554 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 
05:57:13,604 p=25239 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "c65b55ef-1bb6-4c30-944e-52f7e111b334"}, "changed": false} >2018-06-25 05:57:13,624 p=25239 u=mistral | TASK [Render deployment file for ComputeArtifactsDeploy] *********************** >2018-06-25 05:57:14,308 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "bd36cec238fc7f8909648ba2fafe8def614c4aa0", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeArtifactsDeploy-c65b55ef-1bb6-4c30-944e-52f7e111b334", "gid": 0, "group": "root", "md5sum": "38bf3da708c25a8e673df26576f11719", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2015, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920633.68-80548624939683/source", "state": "file", "uid": 0} >2018-06-25 05:57:14,328 p=25239 u=mistral | TASK [Check if deployed file exists for ComputeArtifactsDeploy] **************** >2018-06-25 05:57:14,727 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:57:14,747 p=25239 u=mistral | TASK [Check previous deployment rc for ComputeArtifactsDeploy] ***************** >2018-06-25 05:57:14,766 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:14,785 p=25239 u=mistral | TASK [Remove deployed file for ComputeArtifactsDeploy when previous deployment failed] *** >2018-06-25 05:57:14,805 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:14,825 p=25239 u=mistral | TASK [Force remove deployed file for ComputeArtifactsDeploy] ******************* >2018-06-25 05:57:14,842 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:14,860 p=25239 u=mistral | TASK [Run deployment ComputeArtifactsDeploy] *********************************** >2018-06-25 
05:57:15,773 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/c65b55ef-1bb6-4c30-944e-52f7e111b334.notify.json)", "delta": "0:00:00.476697", "end": "2018-06-25 05:57:15.834315", "rc": 0, "start": "2018-06-25 05:57:15.357618", "stderr": "[2018-06-25 05:57:15,384] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/c65b55ef-1bb6-4c30-944e-52f7e111b334.json\n[2018-06-25 05:57:15,415] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:57:15,415] (heat-config) [DEBUG] [2018-06-25 05:57:15,405] (heat-config) [INFO] artifact_urls=\n[2018-06-25 05:57:15,406] (heat-config) [INFO] deploy_server_id=8c72c6a7-e03d-47d0-9bdc-b75a20bdbd88\n[2018-06-25 05:57:15,406] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-25 05:57:15,406] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-xkma6mui6qhp-ComputeArtifactsDeploy-vrekg4baqixi-0-rwdvmnewmvjl/c264da6d-99bb-4f49-87b9-cb2501e60a62\n[2018-06-25 05:57:15,406] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-25 05:57:15,406] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-25 05:57:15,406] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/c65b55ef-1bb6-4c30-944e-52f7e111b334\n[2018-06-25 05:57:15,412] (heat-config) [INFO] No artifact_urls was set. 
Skipping...\n\n[2018-06-25 05:57:15,412] (heat-config) [DEBUG] \n[2018-06-25 05:57:15,412] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/c65b55ef-1bb6-4c30-944e-52f7e111b334\n\n[2018-06-25 05:57:15,415] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-25 05:57:15,416] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c65b55ef-1bb6-4c30-944e-52f7e111b334.json < /var/lib/heat-config/deployed/c65b55ef-1bb6-4c30-944e-52f7e111b334.notify.json\n[2018-06-25 05:57:15,827] (heat-config) [INFO] \n[2018-06-25 05:57:15,828] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:57:15,384] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/c65b55ef-1bb6-4c30-944e-52f7e111b334.json", "[2018-06-25 05:57:15,415] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:57:15,415] (heat-config) [DEBUG] [2018-06-25 05:57:15,405] (heat-config) [INFO] artifact_urls=", "[2018-06-25 05:57:15,406] (heat-config) [INFO] deploy_server_id=8c72c6a7-e03d-47d0-9bdc-b75a20bdbd88", "[2018-06-25 05:57:15,406] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-25 05:57:15,406] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-xkma6mui6qhp-ComputeArtifactsDeploy-vrekg4baqixi-0-rwdvmnewmvjl/c264da6d-99bb-4f49-87b9-cb2501e60a62", "[2018-06-25 05:57:15,406] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-25 05:57:15,406] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-25 05:57:15,406] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/c65b55ef-1bb6-4c30-944e-52f7e111b334", "[2018-06-25 05:57:15,412] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", "", "[2018-06-25 05:57:15,412] (heat-config) [DEBUG] ", "[2018-06-25 05:57:15,412] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/c65b55ef-1bb6-4c30-944e-52f7e111b334", "", "[2018-06-25 05:57:15,415] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-25 05:57:15,416] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c65b55ef-1bb6-4c30-944e-52f7e111b334.json < /var/lib/heat-config/deployed/c65b55ef-1bb6-4c30-944e-52f7e111b334.notify.json", "[2018-06-25 05:57:15,827] (heat-config) [INFO] ", "[2018-06-25 05:57:15,828] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:57:15,792 p=25239 u=mistral | TASK [Output for ComputeArtifactsDeploy] *************************************** >2018-06-25 05:57:15,843 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:57:15,384] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/c65b55ef-1bb6-4c30-944e-52f7e111b334.json", > "[2018-06-25 05:57:15,415] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:57:15,415] (heat-config) [DEBUG] [2018-06-25 05:57:15,405] (heat-config) [INFO] artifact_urls=", > "[2018-06-25 05:57:15,406] (heat-config) [INFO] deploy_server_id=8c72c6a7-e03d-47d0-9bdc-b75a20bdbd88", > "[2018-06-25 05:57:15,406] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-25 05:57:15,406] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-xkma6mui6qhp-ComputeArtifactsDeploy-vrekg4baqixi-0-rwdvmnewmvjl/c264da6d-99bb-4f49-87b9-cb2501e60a62", > "[2018-06-25 05:57:15,406] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-25 05:57:15,406] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-25 05:57:15,406] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/c65b55ef-1bb6-4c30-944e-52f7e111b334", > "[2018-06-25 05:57:15,412] (heat-config) [INFO] No artifact_urls was set. Skipping...", > "", > "[2018-06-25 05:57:15,412] (heat-config) [DEBUG] ", > "[2018-06-25 05:57:15,412] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/c65b55ef-1bb6-4c30-944e-52f7e111b334", > "", > "[2018-06-25 05:57:15,415] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-25 05:57:15,416] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c65b55ef-1bb6-4c30-944e-52f7e111b334.json < /var/lib/heat-config/deployed/c65b55ef-1bb6-4c30-944e-52f7e111b334.notify.json", > "[2018-06-25 05:57:15,827] (heat-config) [INFO] ", > "[2018-06-25 05:57:15,828] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:57:15,863 p=25239 u=mistral | TASK [Check-mode for Run deployment ComputeArtifactsDeploy] ******************** >2018-06-25 05:57:15,878 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:15,900 p=25239 u=mistral | TASK [include] 
***************************************************************** >2018-06-25 05:57:15,988 p=25239 u=mistral | TASK [include] ***************************************************************** >2018-06-25 05:57:16,077 p=25239 u=mistral | TASK [include] ***************************************************************** >2018-06-25 05:57:16,305 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/CephStorage/deployments.yaml for ceph-0 >2018-06-25 05:57:16,315 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/CephStorage/deployments.yaml for ceph-0 >2018-06-25 05:57:16,322 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/CephStorage/deployments.yaml for ceph-0 >2018-06-25 05:57:16,330 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/CephStorage/deployments.yaml for ceph-0 >2018-06-25 05:57:16,338 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/CephStorage/deployments.yaml for ceph-0 >2018-06-25 05:57:16,345 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/CephStorage/deployments.yaml for ceph-0 >2018-06-25 05:57:16,353 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/CephStorage/deployments.yaml for ceph-0 >2018-06-25 05:57:16,361 p=25239 u=mistral | included: /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/CephStorage/deployments.yaml for ceph-0 >2018-06-25 05:57:16,426 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:57:16,484 p=25239 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "433912bc-b12b-4e77-906a-673041de3f0c"}, "changed": false} >2018-06-25 05:57:16,502 p=25239 u=mistral | TASK [Render deployment file for NetworkDeployment] **************************** >2018-06-25 05:57:17,082 p=25239 u=mistral | changed: [ceph-0] => {"changed": 
true, "checksum": "3f8be50e5903e154ea77c1846020c12fd2c63b70", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-433912bc-b12b-4e77-906a-673041de3f0c", "gid": 0, "group": "root", "md5sum": "5284a9f4bab32430807911d0b8ce1943", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 8777, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920636.56-25023183508854/source", "state": "file", "uid": 0} >2018-06-25 05:57:17,101 p=25239 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] ********************* >2018-06-25 05:57:17,402 p=25239 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:57:17,422 p=25239 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] ********************** >2018-06-25 05:57:17,440 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:17,458 p=25239 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] *** >2018-06-25 05:57:17,476 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:17,495 p=25239 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************ >2018-06-25 05:57:17,511 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:17,529 p=25239 u=mistral | TASK [Run deployment NetworkDeployment] **************************************** >2018-06-25 05:57:32,660 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/433912bc-b12b-4e77-906a-673041de3f0c.notify.json)", "delta": "0:00:14.807105", "end": "2018-06-25 05:57:32.753559", "rc": 0, "start": "2018-06-25 05:57:17.946454", "stderr": "[2018-06-25 
05:57:17,969] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/433912bc-b12b-4e77-906a-673041de3f0c.json\n[2018-06-25 05:57:32,346] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": 
\\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/25 05:57:18 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/25 05:57:18 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/25 05:57:18 AM] [INFO] Not using any mapping file.\\n[2018/06/25 05:57:18 AM] [INFO] Finding active nics\\n[2018/06/25 05:57:18 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/25 05:57:18 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/25 05:57:18 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/25 05:57:18 AM] [INFO] lo is not an active nic\\n[2018/06/25 05:57:18 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/25 05:57:18 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/25 05:57:18 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/25 05:57:18 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/25 05:57:18 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/25 05:57:18 AM] [INFO] adding interface: eth0\\n[2018/06/25 05:57:18 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/25 05:57:18 AM] [INFO] adding bridge: br-isolated\\n[2018/06/25 05:57:18 AM] [INFO] adding interface: eth1\\n[2018/06/25 05:57:18 AM] [INFO] adding vlan: vlan30\\n[2018/06/25 05:57:18 AM] [INFO] adding vlan: vlan40\\n[2018/06/25 05:57:18 AM] [INFO] applying network configs...\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/25 05:57:18 AM] [INFO] running 
ifdown on interface: eth0\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/25 05:57:18 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/25 05:57:19 AM] [INFO] running ifup on interface: eth1\\n[2018/06/25 05:57:19 AM] [INFO] running ifup on interface: eth0\\n[2018/06/25 05:57:23 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:57:27 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/25 
05:57:31 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:57:31 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-06-25 05:57:32,346] (heat-config) [DEBUG] [2018-06-25 05:57:17,990] (heat-config) [INFO] interface_name=nic1\n[2018-06-25 05:57:17,990] (heat-config) [INFO] bridge_name=br-ex\n[2018-06-25 05:57:17,991] (heat-config) [INFO] deploy_server_id=48f90ddc-458e-4a9f-a1b0-0040aafc9548\n[2018-06-25 05:57:17,991] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-25 05:57:17,991] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-4yxoaen2f3hz-0-uuz6fpm5xioj-NetworkDeployment-l6slsmgwx5eq-TripleOSoftwareDeployment-sgmgriab7nfp/5ec78f02-8ba0-4b88-b970-cc3955f65ea2\n[2018-06-25 05:57:17,991] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-25 05:57:17,991] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-25 05:57:17,991] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/433912bc-b12b-4e77-906a-673041de3f0c\n[2018-06-25 05:57:32,342] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS\n\n[2018-06-25 05:57:32,342] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": 
\"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/06/25 05:57:18 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/06/25 05:57:18 AM] [INFO] Ifcfg net config provider created.\n[2018/06/25 05:57:18 AM] [INFO] Not using any mapping file.\n[2018/06/25 05:57:18 AM] [INFO] Finding active nics\n[2018/06/25 05:57:18 AM] [INFO] eth0 is an embedded active nic\n[2018/06/25 05:57:18 AM] [INFO] eth1 is an embedded active nic\n[2018/06/25 05:57:18 AM] [INFO] eth2 is an embedded active nic\n[2018/06/25 05:57:18 AM] [INFO] lo is not an active nic\n[2018/06/25 05:57:18 AM] [INFO] No DPDK mapping available in path 
(/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/06/25 05:57:18 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/06/25 05:57:18 AM] [INFO] nic3 mapped to: eth2\n[2018/06/25 05:57:18 AM] [INFO] nic2 mapped to: eth1\n[2018/06/25 05:57:18 AM] [INFO] nic1 mapped to: eth0\n[2018/06/25 05:57:18 AM] [INFO] adding interface: eth0\n[2018/06/25 05:57:18 AM] [INFO] adding custom route for interface: eth0\n[2018/06/25 05:57:18 AM] [INFO] adding bridge: br-isolated\n[2018/06/25 05:57:18 AM] [INFO] adding interface: eth1\n[2018/06/25 05:57:18 AM] [INFO] adding vlan: vlan30\n[2018/06/25 05:57:18 AM] [INFO] adding vlan: vlan40\n[2018/06/25 05:57:18 AM] [INFO] applying network configs...\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan40\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: eth1\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: eth0\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan40\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/06/25 05:57:18 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-eth1\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/06/25 05:57:18 AM] [INFO] running ifup on bridge: br-isolated\n[2018/06/25 05:57:19 AM] [INFO] running ifup on interface: eth1\n[2018/06/25 05:57:19 AM] [INFO] running ifup on interface: eth0\n[2018/06/25 05:57:23 AM] [INFO] running ifup on interface: vlan30\n[2018/06/25 05:57:27 AM] [INFO] running ifup on interface: vlan40\n[2018/06/25 05:57:31 AM] [INFO] running ifup on interface: vlan30\n[2018/06/25 05:57:31 AM] [INFO] running ifup on interface: vlan40\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config 
--key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.3\n++ '[' -n 192.168.24.3 ']'\n++ break\n++ echo 192.168.24.3\n+ local METADATA_IP=192.168.24.3\n+ '[' -n 192.168.24.3 ']'\n+ is_local_ip 192.168.24.3\n+ local IP_TO_CHECK=192.168.24.3\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.3/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\n+ _ping=ping\n+ [[ 192.168.24.3 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.3\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-06-25 05:57:32,342] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/433912bc-b12b-4e77-906a-673041de3f0c\n\n[2018-06-25 05:57:32,346] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-25 05:57:32,347] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/433912bc-b12b-4e77-906a-673041de3f0c.json < /var/lib/heat-config/deployed/433912bc-b12b-4e77-906a-673041de3f0c.notify.json\n[2018-06-25 05:57:32,747] (heat-config) [INFO] \n[2018-06-25 05:57:32,748] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:57:17,969] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/433912bc-b12b-4e77-906a-673041de3f0c.json", "[2018-06-25 05:57:32,346] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": 
[{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/25 05:57:18 AM] [INFO] Using config file 
at: /etc/os-net-config/config.json\\n[2018/06/25 05:57:18 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/25 05:57:18 AM] [INFO] Not using any mapping file.\\n[2018/06/25 05:57:18 AM] [INFO] Finding active nics\\n[2018/06/25 05:57:18 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/25 05:57:18 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/25 05:57:18 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/25 05:57:18 AM] [INFO] lo is not an active nic\\n[2018/06/25 05:57:18 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/25 05:57:18 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/25 05:57:18 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/25 05:57:18 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/25 05:57:18 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/25 05:57:18 AM] [INFO] adding interface: eth0\\n[2018/06/25 05:57:18 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/25 05:57:18 AM] [INFO] adding bridge: br-isolated\\n[2018/06/25 05:57:18 AM] [INFO] adding interface: eth1\\n[2018/06/25 05:57:18 AM] [INFO] adding vlan: vlan30\\n[2018/06/25 05:57:18 AM] [INFO] adding vlan: vlan40\\n[2018/06/25 05:57:18 AM] [INFO] applying network configs...\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/25 05:57:18 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/25 05:57:18 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/25 05:57:19 AM] [INFO] running ifup on interface: eth1\\n[2018/06/25 05:57:19 AM] [INFO] running ifup on interface: eth0\\n[2018/06/25 05:57:23 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:57:27 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/25 05:57:31 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:57:31 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed 
-e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-06-25 05:57:32,346] (heat-config) [DEBUG] [2018-06-25 05:57:17,990] (heat-config) [INFO] interface_name=nic1", "[2018-06-25 05:57:17,990] (heat-config) [INFO] bridge_name=br-ex", "[2018-06-25 05:57:17,991] (heat-config) [INFO] deploy_server_id=48f90ddc-458e-4a9f-a1b0-0040aafc9548", "[2018-06-25 05:57:17,991] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-25 05:57:17,991] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-4yxoaen2f3hz-0-uuz6fpm5xioj-NetworkDeployment-l6slsmgwx5eq-TripleOSoftwareDeployment-sgmgriab7nfp/5ec78f02-8ba0-4b88-b970-cc3955f65ea2", "[2018-06-25 05:57:17,991] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-25 05:57:17,991] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-25 05:57:17,991] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/433912bc-b12b-4e77-906a-673041de3f0c", "[2018-06-25 05:57:32,342] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", "", "[2018-06-25 05:57:32,342] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, 
{\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/06/25 05:57:18 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/06/25 05:57:18 AM] [INFO] Ifcfg net config provider created.", "[2018/06/25 05:57:18 AM] [INFO] Not using any mapping file.", "[2018/06/25 05:57:18 AM] [INFO] Finding active nics", "[2018/06/25 05:57:18 AM] [INFO] eth0 is an embedded active nic", "[2018/06/25 05:57:18 AM] [INFO] eth1 is an embedded active nic", "[2018/06/25 05:57:18 AM] [INFO] eth2 is an embedded active nic", "[2018/06/25 05:57:18 AM] [INFO] lo is not an active nic", "[2018/06/25 05:57:18 AM] 
[INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/06/25 05:57:18 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/06/25 05:57:18 AM] [INFO] nic3 mapped to: eth2", "[2018/06/25 05:57:18 AM] [INFO] nic2 mapped to: eth1", "[2018/06/25 05:57:18 AM] [INFO] nic1 mapped to: eth0", "[2018/06/25 05:57:18 AM] [INFO] adding interface: eth0", "[2018/06/25 05:57:18 AM] [INFO] adding custom route for interface: eth0", "[2018/06/25 05:57:18 AM] [INFO] adding bridge: br-isolated", "[2018/06/25 05:57:18 AM] [INFO] adding interface: eth1", "[2018/06/25 05:57:18 AM] [INFO] adding vlan: vlan30", "[2018/06/25 05:57:18 AM] [INFO] adding vlan: vlan40", "[2018/06/25 05:57:18 AM] [INFO] applying network configs...", "[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan40", "[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: eth1", "[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: eth0", "[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan40", "[2018/06/25 05:57:18 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/06/25 05:57:18 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/06/25 05:57:18 AM] [INFO] running ifup on bridge: br-isolated", "[2018/06/25 05:57:19 AM] [INFO] running ifup on interface: eth1", "[2018/06/25 05:57:19 AM] [INFO] running ifup on interface: eth0", "[2018/06/25 05:57:23 AM] [INFO] running ifup on interface: vlan30", "[2018/06/25 05:57:27 AM] [INFO] running ifup on interface: vlan40", "[2018/06/25 05:57:31 AM] [INFO] running ifup on interface: vlan30", "[2018/06/25 05:57:31 AM] [INFO] running ifup on interface: vlan40", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in 
os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.3", "++ '[' -n 192.168.24.3 ']'", "++ break", "++ echo 192.168.24.3", "+ local METADATA_IP=192.168.24.3", "+ '[' -n 192.168.24.3 ']'", "+ is_local_ip 192.168.24.3", "+ local IP_TO_CHECK=192.168.24.3", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.3/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", "+ _ping=ping", "+ [[ 192.168.24.3 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.3", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-06-25 05:57:32,342] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/433912bc-b12b-4e77-906a-673041de3f0c", "", "[2018-06-25 05:57:32,346] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-25 05:57:32,347] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/433912bc-b12b-4e77-906a-673041de3f0c.json < /var/lib/heat-config/deployed/433912bc-b12b-4e77-906a-673041de3f0c.notify.json", "[2018-06-25 05:57:32,747] (heat-config) [INFO] ", "[2018-06-25 05:57:32,748] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:57:32,681 p=25239 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-06-25 05:57:32,788 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:57:17,969] 
(heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/433912bc-b12b-4e77-906a-673041de3f0c.json", > "[2018-06-25 05:57:32,346] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": 
\\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/25 05:57:18 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/25 05:57:18 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/25 05:57:18 AM] [INFO] Not using any mapping file.\\n[2018/06/25 05:57:18 AM] [INFO] Finding active nics\\n[2018/06/25 05:57:18 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/25 05:57:18 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/25 05:57:18 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/25 05:57:18 AM] [INFO] lo is not an active nic\\n[2018/06/25 05:57:18 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/25 05:57:18 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/25 05:57:18 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/25 05:57:18 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/25 05:57:18 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/25 05:57:18 AM] [INFO] adding interface: eth0\\n[2018/06/25 05:57:18 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/25 05:57:18 AM] [INFO] adding bridge: br-isolated\\n[2018/06/25 05:57:18 AM] [INFO] adding interface: eth1\\n[2018/06/25 05:57:18 AM] [INFO] adding vlan: vlan30\\n[2018/06/25 05:57:18 AM] [INFO] adding vlan: vlan40\\n[2018/06/25 05:57:18 AM] [INFO] applying network configs...\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/25 05:57:18 AM] [INFO] running 
ifdown on interface: eth0\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/25 05:57:18 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/25 05:57:18 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/25 05:57:19 AM] [INFO] running ifup on interface: eth1\\n[2018/06/25 05:57:19 AM] [INFO] running ifup on interface: eth0\\n[2018/06/25 05:57:23 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:57:27 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/25 
05:57:31 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/25 05:57:31 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-06-25 05:57:32,346] (heat-config) [DEBUG] [2018-06-25 05:57:17,990] (heat-config) [INFO] interface_name=nic1", > "[2018-06-25 05:57:17,990] (heat-config) [INFO] bridge_name=br-ex", > "[2018-06-25 05:57:17,991] (heat-config) [INFO] deploy_server_id=48f90ddc-458e-4a9f-a1b0-0040aafc9548", > "[2018-06-25 05:57:17,991] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-25 05:57:17,991] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-4yxoaen2f3hz-0-uuz6fpm5xioj-NetworkDeployment-l6slsmgwx5eq-TripleOSoftwareDeployment-sgmgriab7nfp/5ec78f02-8ba0-4b88-b970-cc3955f65ea2", > "[2018-06-25 05:57:17,991] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-25 05:57:17,991] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-25 05:57:17,991] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/433912bc-b12b-4e77-906a-673041de3f0c", > "[2018-06-25 05:57:32,342] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", > "", > "[2018-06-25 05:57:32,342] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", 
\"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/06/25 05:57:18 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/06/25 05:57:18 AM] [INFO] Ifcfg net config provider created.", > "[2018/06/25 05:57:18 AM] [INFO] Not using any mapping file.", > "[2018/06/25 05:57:18 AM] [INFO] Finding active nics", > "[2018/06/25 05:57:18 AM] [INFO] eth0 is an embedded active nic", > "[2018/06/25 05:57:18 AM] [INFO] eth1 is an embedded active nic", > "[2018/06/25 05:57:18 AM] [INFO] eth2 is an embedded active nic", > "[2018/06/25 05:57:18 AM] [INFO] 
lo is not an active nic", > "[2018/06/25 05:57:18 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/06/25 05:57:18 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/06/25 05:57:18 AM] [INFO] nic3 mapped to: eth2", > "[2018/06/25 05:57:18 AM] [INFO] nic2 mapped to: eth1", > "[2018/06/25 05:57:18 AM] [INFO] nic1 mapped to: eth0", > "[2018/06/25 05:57:18 AM] [INFO] adding interface: eth0", > "[2018/06/25 05:57:18 AM] [INFO] adding custom route for interface: eth0", > "[2018/06/25 05:57:18 AM] [INFO] adding bridge: br-isolated", > "[2018/06/25 05:57:18 AM] [INFO] adding interface: eth1", > "[2018/06/25 05:57:18 AM] [INFO] adding vlan: vlan30", > "[2018/06/25 05:57:18 AM] [INFO] adding vlan: vlan40", > "[2018/06/25 05:57:18 AM] [INFO] applying network configs...", > "[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan40", > "[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: eth1", > "[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: eth0", > "[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/25 05:57:18 AM] [INFO] running ifdown on interface: vlan40", > "[2018/06/25 05:57:18 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/06/25 05:57:18 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", > "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/06/25 05:57:18 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/06/25 05:57:18 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/06/25 05:57:19 AM] [INFO] running ifup on interface: eth1", > "[2018/06/25 05:57:19 AM] [INFO] running ifup on interface: eth0", > "[2018/06/25 05:57:23 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/25 05:57:27 AM] [INFO] running ifup on interface: vlan40", > "[2018/06/25 05:57:31 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/25 05:57:31 AM] [INFO] running ifup on interface: vlan40", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key 
os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.3", > "++ '[' -n 192.168.24.3 ']'", > "++ break", > "++ echo 192.168.24.3", > "+ local METADATA_IP=192.168.24.3", > "+ '[' -n 192.168.24.3 ']'", > "+ is_local_ip 192.168.24.3", > "+ local IP_TO_CHECK=192.168.24.3", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.3/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", > "+ _ping=ping", > "+ [[ 192.168.24.3 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.3", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-06-25 05:57:32,342] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/433912bc-b12b-4e77-906a-673041de3f0c", > "", > "[2018-06-25 05:57:32,346] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-25 05:57:32,347] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/433912bc-b12b-4e77-906a-673041de3f0c.json < /var/lib/heat-config/deployed/433912bc-b12b-4e77-906a-673041de3f0c.notify.json", > "[2018-06-25 05:57:32,747] (heat-config) [INFO] ", > "[2018-06-25 05:57:32,748] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 
05:57:32,811 p=25239 u=mistral | TASK [Check-mode for Run deployment NetworkDeployment] ************************* >2018-06-25 05:57:32,826 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:32,845 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:57:32,955 p=25239 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "60591350-8a02-43a5-a8b7-7aaa22eff2e8"}, "changed": false} >2018-06-25 05:57:33,015 p=25239 u=mistral | TASK [Render deployment file for CephStorageUpgradeInitDeployment] ************* >2018-06-25 05:57:33,627 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "649090668bb1f80cc14ff677e1cc1a6a5f43caf7", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageUpgradeInitDeployment-60591350-8a02-43a5-a8b7-7aaa22eff2e8", "gid": 0, "group": "root", "md5sum": "ae05713817042081b526561678ee6ab6", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1186, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920653.06-58948678049516/source", "state": "file", "uid": 0} >2018-06-25 05:57:33,647 p=25239 u=mistral | TASK [Check if deployed file exists for CephStorageUpgradeInitDeployment] ****** >2018-06-25 05:57:33,972 p=25239 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:57:33,993 p=25239 u=mistral | TASK [Check previous deployment rc for CephStorageUpgradeInitDeployment] ******* >2018-06-25 05:57:34,013 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:34,031 p=25239 u=mistral | TASK [Remove deployed file for CephStorageUpgradeInitDeployment when previous deployment failed] *** >2018-06-25 05:57:34,049 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 
05:57:34,067 p=25239 u=mistral | TASK [Force remove deployed file for CephStorageUpgradeInitDeployment] ********* >2018-06-25 05:57:34,083 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:34,101 p=25239 u=mistral | TASK [Run deployment CephStorageUpgradeInitDeployment] ************************* >2018-06-25 05:57:34,875 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/60591350-8a02-43a5-a8b7-7aaa22eff2e8.notify.json)", "delta": "0:00:00.455784", "end": "2018-06-25 05:57:34.980754", "rc": 0, "start": "2018-06-25 05:57:34.524970", "stderr": "[2018-06-25 05:57:34,548] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/60591350-8a02-43a5-a8b7-7aaa22eff2e8.json\n[2018-06-25 05:57:34,575] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:57:34,575] (heat-config) [DEBUG] [2018-06-25 05:57:34,569] (heat-config) [INFO] deploy_server_id=48f90ddc-458e-4a9f-a1b0-0040aafc9548\n[2018-06-25 05:57:34,569] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-25 05:57:34,569] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-4yxoaen2f3hz-0-uuz6fpm5xioj-CephStorageUpgradeInitDeployment-k7nplchceyzh/a335f58c-2454-48dc-b3f7-fb069fd9f643\n[2018-06-25 05:57:34,569] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-25 05:57:34,569] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-25 05:57:34,569] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/60591350-8a02-43a5-a8b7-7aaa22eff2e8\n[2018-06-25 05:57:34,572] (heat-config) [INFO] \n[2018-06-25 05:57:34,572] (heat-config) [DEBUG] \n[2018-06-25 05:57:34,572] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/60591350-8a02-43a5-a8b7-7aaa22eff2e8\n\n[2018-06-25 05:57:34,575] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-25 05:57:34,575] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/60591350-8a02-43a5-a8b7-7aaa22eff2e8.json < /var/lib/heat-config/deployed/60591350-8a02-43a5-a8b7-7aaa22eff2e8.notify.json\n[2018-06-25 05:57:34,975] (heat-config) [INFO] \n[2018-06-25 05:57:34,975] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:57:34,548] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/60591350-8a02-43a5-a8b7-7aaa22eff2e8.json", "[2018-06-25 05:57:34,575] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:57:34,575] (heat-config) [DEBUG] [2018-06-25 05:57:34,569] (heat-config) [INFO] deploy_server_id=48f90ddc-458e-4a9f-a1b0-0040aafc9548", "[2018-06-25 05:57:34,569] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-25 05:57:34,569] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-4yxoaen2f3hz-0-uuz6fpm5xioj-CephStorageUpgradeInitDeployment-k7nplchceyzh/a335f58c-2454-48dc-b3f7-fb069fd9f643", "[2018-06-25 05:57:34,569] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-25 05:57:34,569] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-25 05:57:34,569] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/60591350-8a02-43a5-a8b7-7aaa22eff2e8", "[2018-06-25 05:57:34,572] (heat-config) [INFO] ", "[2018-06-25 05:57:34,572] (heat-config) [DEBUG] ", "[2018-06-25 05:57:34,572] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/60591350-8a02-43a5-a8b7-7aaa22eff2e8", "", "[2018-06-25 05:57:34,575] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-25 05:57:34,575] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/60591350-8a02-43a5-a8b7-7aaa22eff2e8.json < /var/lib/heat-config/deployed/60591350-8a02-43a5-a8b7-7aaa22eff2e8.notify.json", "[2018-06-25 05:57:34,975] (heat-config) [INFO] ", "[2018-06-25 05:57:34,975] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:57:34,894 p=25239 u=mistral | TASK [Output for CephStorageUpgradeInitDeployment] ***************************** >2018-06-25 05:57:34,941 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:57:34,548] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/60591350-8a02-43a5-a8b7-7aaa22eff2e8.json", > "[2018-06-25 05:57:34,575] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:57:34,575] (heat-config) [DEBUG] [2018-06-25 05:57:34,569] (heat-config) [INFO] deploy_server_id=48f90ddc-458e-4a9f-a1b0-0040aafc9548", > "[2018-06-25 05:57:34,569] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-25 05:57:34,569] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-4yxoaen2f3hz-0-uuz6fpm5xioj-CephStorageUpgradeInitDeployment-k7nplchceyzh/a335f58c-2454-48dc-b3f7-fb069fd9f643", > "[2018-06-25 05:57:34,569] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-25 05:57:34,569] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-25 05:57:34,569] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/60591350-8a02-43a5-a8b7-7aaa22eff2e8", > "[2018-06-25 05:57:34,572] (heat-config) [INFO] ", > "[2018-06-25 05:57:34,572] (heat-config) [DEBUG] ", > "[2018-06-25 05:57:34,572] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/60591350-8a02-43a5-a8b7-7aaa22eff2e8", > "", > "[2018-06-25 05:57:34,575] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-25 05:57:34,575] (heat-config) [DEBUG] 
Running heat-config-notify /var/lib/heat-config/deployed/60591350-8a02-43a5-a8b7-7aaa22eff2e8.json < /var/lib/heat-config/deployed/60591350-8a02-43a5-a8b7-7aaa22eff2e8.notify.json", > "[2018-06-25 05:57:34,975] (heat-config) [INFO] ", > "[2018-06-25 05:57:34,975] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:57:34,960 p=25239 u=mistral | TASK [Check-mode for Run deployment CephStorageUpgradeInitDeployment] ********** >2018-06-25 05:57:34,973 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:34,991 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:57:35,076 p=25239 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "316e1331-951c-4446-b514-74d43bbba349"}, "changed": false} >2018-06-25 05:57:35,096 p=25239 u=mistral | TASK [Render deployment file for CephStorageDeployment] ************************ >2018-06-25 05:57:35,709 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "5f420f09b43c2c16a73681f29252109ffd861546", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageDeployment-316e1331-951c-4446-b514-74d43bbba349", "gid": 0, "group": "root", "md5sum": "cc31957b0c42855d70e1fcdafd79f8cd", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9062, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920655.18-126480184159198/source", "state": "file", "uid": 0} >2018-06-25 05:57:35,730 p=25239 u=mistral | TASK [Check if deployed file exists for CephStorageDeployment] ***************** >2018-06-25 05:57:36,042 p=25239 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:57:36,061 p=25239 u=mistral | TASK [Check previous deployment rc for CephStorageDeployment] ****************** >2018-06-25 05:57:36,079 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-06-25 05:57:36,097 p=25239 u=mistral | TASK [Remove deployed file for CephStorageDeployment when previous deployment failed] *** >2018-06-25 05:57:36,114 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:36,132 p=25239 u=mistral | TASK [Force remove deployed file for CephStorageDeployment] ******************** >2018-06-25 05:57:36,155 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:36,174 p=25239 u=mistral | TASK [Run deployment CephStorageDeployment] ************************************ >2018-06-25 05:57:37,031 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/316e1331-951c-4446-b514-74d43bbba349.notify.json)", "delta": "0:00:00.530741", "end": "2018-06-25 05:57:37.138705", "rc": 0, "start": "2018-06-25 05:57:36.607964", "stderr": "[2018-06-25 05:57:36,633] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/316e1331-951c-4446-b514-74d43bbba349.json\n[2018-06-25 05:57:36,750] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:57:36,750] (heat-config) [DEBUG] \n[2018-06-25 05:57:36,750] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-25 05:57:36,751] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/316e1331-951c-4446-b514-74d43bbba349.json < /var/lib/heat-config/deployed/316e1331-951c-4446-b514-74d43bbba349.notify.json\n[2018-06-25 05:57:37,133] (heat-config) [INFO] \n[2018-06-25 05:57:37,133] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:57:36,633] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < 
/var/lib/heat-config/deployed/316e1331-951c-4446-b514-74d43bbba349.json", "[2018-06-25 05:57:36,750] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:57:36,750] (heat-config) [DEBUG] ", "[2018-06-25 05:57:36,750] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-25 05:57:36,751] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/316e1331-951c-4446-b514-74d43bbba349.json < /var/lib/heat-config/deployed/316e1331-951c-4446-b514-74d43bbba349.notify.json", "[2018-06-25 05:57:37,133] (heat-config) [INFO] ", "[2018-06-25 05:57:37,133] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:57:37,052 p=25239 u=mistral | TASK [Output for CephStorageDeployment] **************************************** >2018-06-25 05:57:37,100 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:57:36,633] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/316e1331-951c-4446-b514-74d43bbba349.json", > "[2018-06-25 05:57:36,750] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:57:36,750] (heat-config) [DEBUG] ", > "[2018-06-25 05:57:36,750] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-25 05:57:36,751] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/316e1331-951c-4446-b514-74d43bbba349.json < /var/lib/heat-config/deployed/316e1331-951c-4446-b514-74d43bbba349.notify.json", > "[2018-06-25 05:57:37,133] (heat-config) [INFO] ", > "[2018-06-25 05:57:37,133] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:57:37,119 p=25239 u=mistral | TASK [Check-mode for Run deployment CephStorageDeployment] ********************* >2018-06-25 05:57:37,134 p=25239 u=mistral | skipping: [ceph-0] 
=> {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:37,152 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:57:37,207 p=25239 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "ab7cc7ca-12a4-499a-b497-67f81881a06b"}, "changed": false} >2018-06-25 05:57:37,226 p=25239 u=mistral | TASK [Render deployment file for CephStorageHostsDeployment] ******************* >2018-06-25 05:57:37,784 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "1472a06ac04c322a8367273d19d4577d307e6563", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageHostsDeployment-ab7cc7ca-12a4-499a-b497-67f81881a06b", "gid": 0, "group": "root", "md5sum": "9c3ce01ac80e714f1a3b05aaebe738e4", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4089, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920657.28-140818951713603/source", "state": "file", "uid": 0} >2018-06-25 05:57:37,803 p=25239 u=mistral | TASK [Check if deployed file exists for CephStorageHostsDeployment] ************ >2018-06-25 05:57:38,109 p=25239 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:57:38,128 p=25239 u=mistral | TASK [Check previous deployment rc for CephStorageHostsDeployment] ************* >2018-06-25 05:57:38,150 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:38,171 p=25239 u=mistral | TASK [Remove deployed file for CephStorageHostsDeployment when previous deployment failed] *** >2018-06-25 05:57:38,188 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:38,207 p=25239 u=mistral | TASK [Force remove deployed file for CephStorageHostsDeployment] *************** >2018-06-25 05:57:38,223 p=25239 u=mistral | skipping: [ceph-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:38,242 p=25239 u=mistral | TASK [Run deployment CephStorageHostsDeployment] ******************************* >2018-06-25 05:57:39,040 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/ab7cc7ca-12a4-499a-b497-67f81881a06b.notify.json)", "delta": "0:00:00.457202", "end": "2018-06-25 05:57:39.121626", "rc": 0, "start": "2018-06-25 05:57:38.664424", "stderr": "[2018-06-25 05:57:38,686] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/ab7cc7ca-12a4-499a-b497-67f81881a06b.json\n[2018-06-25 05:57:38,719] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain 
compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 
compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 
overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}\n[2018-06-25 05:57:38,720] (heat-config) [DEBUG] [2018-06-25 05:57:38,705] (heat-config) [INFO] hosts=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-06-25 05:57:38,706] (heat-config) [INFO] deploy_server_id=48f90ddc-458e-4a9f-a1b0-0040aafc9548\n[2018-06-25 05:57:38,706] (heat-config) [INFO] 
deploy_action=CREATE\n[2018-06-25 05:57:38,706] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-utta33ytrryx-0-b46rmi3xgf5e/9db0d486-7fdf-407c-a0aa-38ab66dfae7e\n[2018-06-25 05:57:38,706] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-25 05:57:38,706] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-25 05:57:38,706] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/ab7cc7ca-12a4-499a-b497-67f81881a06b\n[2018-06-25 05:57:38,716] (heat-config) [INFO] \n[2018-06-25 05:57:38,716] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain 
ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 
'entries=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 
overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 
overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries 
/etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 
overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\n172.17.3.18 overcloud.storage.localdomain\n172.17.4.11 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.12 controller-0.localdomain controller-0\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.106 controller-0.external.localdomain controller-0.external\n192.168.24.14 controller-0.management.localdomain controller-0.management\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.10 compute-0.localdomain compute-0\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-06-25 05:57:38,716] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/ab7cc7ca-12a4-499a-b497-67f81881a06b\n\n[2018-06-25 05:57:38,720] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-25 05:57:38,720] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ab7cc7ca-12a4-499a-b497-67f81881a06b.json < /var/lib/heat-config/deployed/ab7cc7ca-12a4-499a-b497-67f81881a06b.notify.json\n[2018-06-25 05:57:39,115] (heat-config) [INFO] \n[2018-06-25 05:57:39,115] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:57:38,686] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/ab7cc7ca-12a4-499a-b497-67f81881a06b.json", "[2018-06-25 05:57:38,719] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 
compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 
overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", "[2018-06-25 05:57:38,720] (heat-config) [DEBUG] [2018-06-25 05:57:38,705] (heat-config) [INFO] hosts=192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-06-25 05:57:38,706] (heat-config) [INFO] 
deploy_server_id=48f90ddc-458e-4a9f-a1b0-0040aafc9548", "[2018-06-25 05:57:38,706] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-25 05:57:38,706] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-utta33ytrryx-0-b46rmi3xgf5e/9db0d486-7fdf-407c-a0aa-38ab66dfae7e", "[2018-06-25 05:57:38,706] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-25 05:57:38,706] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-25 05:57:38,706] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/ab7cc7ca-12a4-499a-b497-67f81881a06b", "[2018-06-25 05:57:38,716] (heat-config) [INFO] ", "[2018-06-25 05:57:38,716] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", 
"192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain 
ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 
ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", 
"+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", "172.17.3.18 overcloud.storage.localdomain", "172.17.4.11 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.12 controller-0.localdomain controller-0", "172.17.3.10 controller-0.storage.localdomain controller-0.storage", "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.106 controller-0.external.localdomain controller-0.external", "192.168.24.14 controller-0.management.localdomain controller-0.management", "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.10 compute-0.localdomain compute-0", "172.17.3.16 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", 
"", "[2018-06-25 05:57:38,716] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/ab7cc7ca-12a4-499a-b497-67f81881a06b", "", "[2018-06-25 05:57:38,720] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-25 05:57:38,720] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ab7cc7ca-12a4-499a-b497-67f81881a06b.json < /var/lib/heat-config/deployed/ab7cc7ca-12a4-499a-b497-67f81881a06b.notify.json", "[2018-06-25 05:57:39,115] (heat-config) [INFO] ", "[2018-06-25 05:57:39,115] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:57:39,067 p=25239 u=mistral | TASK [Output for CephStorageHostsDeployment] *********************************** >2018-06-25 05:57:39,142 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:57:38,686] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/ab7cc7ca-12a4-499a-b497-67f81881a06b.json", > "[2018-06-25 05:57:38,719] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 
overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain 
controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne 
'# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.12 
overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 
overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.12 overcloud.ctlplane.localdomain\\n172.17.3.18 overcloud.storage.localdomain\\n172.17.4.11 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.12 controller-0.localdomain controller-0\\n172.17.3.10 controller-0.storage.localdomain controller-0.storage\\n172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.16 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.106 controller-0.external.localdomain controller-0.external\\n192.168.24.14 controller-0.management.localdomain controller-0.management\\n192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.10 compute-0.localdomain compute-0\\n172.17.3.16 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.12 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", > "[2018-06-25 05:57:38,720] (heat-config) [DEBUG] [2018-06-25 05:57:38,705] (heat-config) [INFO] hosts=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-06-25 
05:57:38,706] (heat-config) [INFO] deploy_server_id=48f90ddc-458e-4a9f-a1b0-0040aafc9548", > "[2018-06-25 05:57:38,706] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-25 05:57:38,706] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-utta33ytrryx-0-b46rmi3xgf5e/9db0d486-7fdf-407c-a0aa-38ab66dfae7e", > "[2018-06-25 05:57:38,706] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-25 05:57:38,706] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-25 05:57:38,706] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/ab7cc7ca-12a4-499a-b497-67f81881a06b", > "[2018-06-25 05:57:38,716] (heat-config) [INFO] ", > "[2018-06-25 05:57:38,716] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain 
compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain 
compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 
compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 
compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > 
"192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 
compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > 
"192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 
compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 
compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.12 overcloud.ctlplane.localdomain", > "172.17.3.18 overcloud.storage.localdomain", > "172.17.4.11 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.12 controller-0.localdomain controller-0", > "172.17.3.10 controller-0.storage.localdomain controller-0.storage", > "172.17.4.15 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.12 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.16 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.106 controller-0.external.localdomain controller-0.external", > "192.168.24.14 controller-0.management.localdomain controller-0.management", > "192.168.24.14 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.10 compute-0.localdomain compute-0", > "172.17.3.16 
compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.10 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.12 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-06-25 05:57:38,716] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/ab7cc7ca-12a4-499a-b497-67f81881a06b", > "", > "[2018-06-25 05:57:38,720] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-25 05:57:38,720] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ab7cc7ca-12a4-499a-b497-67f81881a06b.json < /var/lib/heat-config/deployed/ab7cc7ca-12a4-499a-b497-67f81881a06b.notify.json", > "[2018-06-25 05:57:39,115] (heat-config) [INFO] ", > "[2018-06-25 05:57:39,115] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:57:39,170 p=25239 u=mistral | TASK [Check-mode for Run deployment CephStorageHostsDeployment] **************** >2018-06-25 05:57:39,187 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:39,206 p=25239 u=mistral | TASK [Lookup deployment UUID] 
************************************************** >2018-06-25 05:57:39,339 p=25239 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "5e018875-4275-442f-9f8d-db42adad1598"}, "changed": false} >2018-06-25 05:57:39,358 p=25239 u=mistral | TASK [Render deployment file for CephStorageAllNodesDeployment] **************** >2018-06-25 05:57:40,059 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "d43f138bd249f1eae1502b565f0cd31d98476edc", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesDeployment-5e018875-4275-442f-9f8d-db42adad1598", "gid": 0, "group": "root", "md5sum": "b915e8cb84b7d18ddec0711ec6b0089f", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19026, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920659.51-135252651362869/source", "state": "file", "uid": 0} >2018-06-25 05:57:40,079 p=25239 u=mistral | TASK [Check if deployed file exists for CephStorageAllNodesDeployment] ********* >2018-06-25 05:57:40,418 p=25239 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:57:40,437 p=25239 u=mistral | TASK [Check previous deployment rc for CephStorageAllNodesDeployment] ********** >2018-06-25 05:57:40,455 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:40,475 p=25239 u=mistral | TASK [Remove deployed file for CephStorageAllNodesDeployment when previous deployment failed] *** >2018-06-25 05:57:40,492 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:40,511 p=25239 u=mistral | TASK [Force remove deployed file for CephStorageAllNodesDeployment] ************ >2018-06-25 05:57:40,527 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:40,546 p=25239 u=mistral | TASK [Run deployment 
CephStorageAllNodesDeployment] **************************** >2018-06-25 05:57:41,435 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/5e018875-4275-442f-9f8d-db42adad1598.notify.json)", "delta": "0:00:00.554013", "end": "2018-06-25 05:57:41.538948", "rc": 0, "start": "2018-06-25 05:57:40.984935", "stderr": "[2018-06-25 05:57:41,011] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/5e018875-4275-442f-9f8d-db42adad1598.json\n[2018-06-25 05:57:41,128] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:57:41,128] (heat-config) [DEBUG] \n[2018-06-25 05:57:41,128] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-25 05:57:41,129] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5e018875-4275-442f-9f8d-db42adad1598.json < /var/lib/heat-config/deployed/5e018875-4275-442f-9f8d-db42adad1598.notify.json\n[2018-06-25 05:57:41,532] (heat-config) [INFO] \n[2018-06-25 05:57:41,532] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:57:41,011] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/5e018875-4275-442f-9f8d-db42adad1598.json", "[2018-06-25 05:57:41,128] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:57:41,128] (heat-config) [DEBUG] ", "[2018-06-25 05:57:41,128] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-25 05:57:41,129] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5e018875-4275-442f-9f8d-db42adad1598.json < /var/lib/heat-config/deployed/5e018875-4275-442f-9f8d-db42adad1598.notify.json", "[2018-06-25 05:57:41,532] (heat-config) [INFO] ", "[2018-06-25 05:57:41,532] 
(heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:57:41,455 p=25239 u=mistral | TASK [Output for CephStorageAllNodesDeployment] ******************************** >2018-06-25 05:57:41,501 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:57:41,011] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/5e018875-4275-442f-9f8d-db42adad1598.json", > "[2018-06-25 05:57:41,128] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:57:41,128] (heat-config) [DEBUG] ", > "[2018-06-25 05:57:41,128] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-25 05:57:41,129] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5e018875-4275-442f-9f8d-db42adad1598.json < /var/lib/heat-config/deployed/5e018875-4275-442f-9f8d-db42adad1598.notify.json", > "[2018-06-25 05:57:41,532] (heat-config) [INFO] ", > "[2018-06-25 05:57:41,532] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:57:41,520 p=25239 u=mistral | TASK [Check-mode for Run deployment CephStorageAllNodesDeployment] ************* >2018-06-25 05:57:41,534 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:41,553 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:57:41,607 p=25239 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75"}, "changed": false} >2018-06-25 05:57:41,626 p=25239 u=mistral | TASK [Render deployment file for CephStorageAllNodesValidationDeployment] ****** >2018-06-25 05:57:42,211 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "7c5c620fe8cec1d525c428ecddd8c6b34abc4898", "dest": 
"/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesValidationDeployment-a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75", "gid": 0, "group": "root", "md5sum": "e95adffd5638c77b615d85573153c1db", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4943, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920661.68-245919573446942/source", "state": "file", "uid": 0} >2018-06-25 05:57:42,231 p=25239 u=mistral | TASK [Check if deployed file exists for CephStorageAllNodesValidationDeployment] *** >2018-06-25 05:57:42,576 p=25239 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:57:42,596 p=25239 u=mistral | TASK [Check previous deployment rc for CephStorageAllNodesValidationDeployment] *** >2018-06-25 05:57:42,618 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:42,639 p=25239 u=mistral | TASK [Remove deployed file for CephStorageAllNodesValidationDeployment when previous deployment failed] *** >2018-06-25 05:57:42,657 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:42,675 p=25239 u=mistral | TASK [Force remove deployed file for CephStorageAllNodesValidationDeployment] *** >2018-06-25 05:57:42,691 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:42,711 p=25239 u=mistral | TASK [Run deployment CephStorageAllNodesValidationDeployment] ****************** >2018-06-25 05:57:44,064 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75.notify.json)", "delta": "0:00:01.024916", "end": "2018-06-25 05:57:44.167645", "rc": 0, "start": "2018-06-25 05:57:43.142729", "stderr": "[2018-06-25 05:57:43,165] 
(heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75.json\n[2018-06-25 05:57:43,735] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.106 for local network 10.0.0.0/24.\\nPing to 10.0.0.106 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.10 for local network 172.17.3.0/24.\\nPing to 172.17.3.10 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.15 for local network 172.17.4.0/24.\\nPing to 172.17.4.15 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.14 for local network 192.168.24.0/24.\\nPing to 192.168.24.14 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:57:43,736] (heat-config) [DEBUG] [2018-06-25 05:57:43,187] (heat-config) [INFO] ping_test_ips=172.17.3.10 172.17.4.15 172.17.1.12 172.17.2.16 10.0.0.106 192.168.24.14\n[2018-06-25 05:57:43,188] (heat-config) [INFO] validate_fqdn=False\n[2018-06-25 05:57:43,188] (heat-config) [INFO] validate_ntp=True\n[2018-06-25 05:57:43,188] (heat-config) [INFO] deploy_server_id=48f90ddc-458e-4a9f-a1b0-0040aafc9548\n[2018-06-25 05:57:43,188] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-25 05:57:43,188] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-lrscaliuqs52-0-xpnbvdgwf3ar/fb9805ed-2338-49cd-b69d-978f51494e75\n[2018-06-25 05:57:43,188] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-25 05:57:43,188] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-25 05:57:43,188] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75\n[2018-06-25 05:57:43,731] (heat-config) [INFO] Trying to ping 10.0.0.106 for local network 10.0.0.0/24.\nPing to 10.0.0.106 succeeded.\nSUCCESS\nTrying to ping 
172.17.3.10 for local network 172.17.3.0/24.\nPing to 172.17.3.10 succeeded.\nSUCCESS\nTrying to ping 172.17.4.15 for local network 172.17.4.0/24.\nPing to 172.17.4.15 succeeded.\nSUCCESS\nTrying to ping 192.168.24.14 for local network 192.168.24.0/24.\nPing to 192.168.24.14 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-06-25 05:57:43,732] (heat-config) [DEBUG] \n[2018-06-25 05:57:43,732] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75\n\n[2018-06-25 05:57:43,736] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-25 05:57:43,736] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75.json < /var/lib/heat-config/deployed/a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75.notify.json\n[2018-06-25 05:57:44,162] (heat-config) [INFO] \n[2018-06-25 05:57:44,162] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:57:43,165] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75.json", "[2018-06-25 05:57:43,735] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.106 for local network 10.0.0.0/24.\\nPing to 10.0.0.106 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.10 for local network 172.17.3.0/24.\\nPing to 172.17.3.10 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.15 for local network 172.17.4.0/24.\\nPing to 172.17.4.15 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.14 for local network 192.168.24.0/24.\\nPing to 192.168.24.14 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:57:43,736] 
(heat-config) [DEBUG] [2018-06-25 05:57:43,187] (heat-config) [INFO] ping_test_ips=172.17.3.10 172.17.4.15 172.17.1.12 172.17.2.16 10.0.0.106 192.168.24.14", "[2018-06-25 05:57:43,188] (heat-config) [INFO] validate_fqdn=False", "[2018-06-25 05:57:43,188] (heat-config) [INFO] validate_ntp=True", "[2018-06-25 05:57:43,188] (heat-config) [INFO] deploy_server_id=48f90ddc-458e-4a9f-a1b0-0040aafc9548", "[2018-06-25 05:57:43,188] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-25 05:57:43,188] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-lrscaliuqs52-0-xpnbvdgwf3ar/fb9805ed-2338-49cd-b69d-978f51494e75", "[2018-06-25 05:57:43,188] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-25 05:57:43,188] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-25 05:57:43,188] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75", "[2018-06-25 05:57:43,731] (heat-config) [INFO] Trying to ping 10.0.0.106 for local network 10.0.0.0/24.", "Ping to 10.0.0.106 succeeded.", "SUCCESS", "Trying to ping 172.17.3.10 for local network 172.17.3.0/24.", "Ping to 172.17.3.10 succeeded.", "SUCCESS", "Trying to ping 172.17.4.15 for local network 172.17.4.0/24.", "Ping to 172.17.4.15 succeeded.", "SUCCESS", "Trying to ping 192.168.24.14 for local network 192.168.24.0/24.", "Ping to 192.168.24.14 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-06-25 05:57:43,732] (heat-config) [DEBUG] ", "[2018-06-25 05:57:43,732] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75", "", "[2018-06-25 05:57:43,736] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-25 05:57:43,736] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75.json < /var/lib/heat-config/deployed/a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75.notify.json", "[2018-06-25 05:57:44,162] (heat-config) [INFO] ", "[2018-06-25 05:57:44,162] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:57:44,083 p=25239 u=mistral | TASK [Output for CephStorageAllNodesValidationDeployment] ********************** >2018-06-25 05:57:44,134 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:57:43,165] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75.json", > "[2018-06-25 05:57:43,735] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.106 for local network 10.0.0.0/24.\\nPing to 10.0.0.106 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.10 for local network 172.17.3.0/24.\\nPing to 172.17.3.10 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.15 for local network 172.17.4.0/24.\\nPing to 172.17.4.15 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.14 for local network 192.168.24.0/24.\\nPing to 192.168.24.14 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:57:43,736] (heat-config) [DEBUG] [2018-06-25 05:57:43,187] (heat-config) [INFO] ping_test_ips=172.17.3.10 172.17.4.15 172.17.1.12 172.17.2.16 10.0.0.106 192.168.24.14", > "[2018-06-25 05:57:43,188] (heat-config) [INFO] validate_fqdn=False", > "[2018-06-25 05:57:43,188] (heat-config) [INFO] validate_ntp=True", > "[2018-06-25 05:57:43,188] (heat-config) [INFO] deploy_server_id=48f90ddc-458e-4a9f-a1b0-0040aafc9548", > "[2018-06-25 05:57:43,188] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-25 05:57:43,188] (heat-config) [INFO] 
deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-lrscaliuqs52-0-xpnbvdgwf3ar/fb9805ed-2338-49cd-b69d-978f51494e75", > "[2018-06-25 05:57:43,188] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-25 05:57:43,188] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-25 05:57:43,188] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75", > "[2018-06-25 05:57:43,731] (heat-config) [INFO] Trying to ping 10.0.0.106 for local network 10.0.0.0/24.", > "Ping to 10.0.0.106 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.10 for local network 172.17.3.0/24.", > "Ping to 172.17.3.10 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.15 for local network 172.17.4.0/24.", > "Ping to 172.17.4.15 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.14 for local network 192.168.24.0/24.", > "Ping to 192.168.24.14 succeeded.", > "SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-06-25 05:57:43,732] (heat-config) [DEBUG] ", > "[2018-06-25 05:57:43,732] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75", > "", > "[2018-06-25 05:57:43,736] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-25 05:57:43,736] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75.json < /var/lib/heat-config/deployed/a6ad7de8-12cd-4fc8-82d3-d8567e3c5d75.notify.json", > "[2018-06-25 05:57:44,162] (heat-config) [INFO] ", > "[2018-06-25 05:57:44,162] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:57:44,153 p=25239 u=mistral | TASK [Check-mode for Run deployment CephStorageAllNodesValidationDeployment] *** >2018-06-25 05:57:44,168 p=25239 u=mistral | 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:44,186 p=25239 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-25 05:57:44,239 p=25239 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17"}, "changed": false} >2018-06-25 05:57:44,258 p=25239 u=mistral | TASK [Render deployment file for CephStorageArtifactsDeploy] ******************* >2018-06-25 05:57:44,842 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "020dced8d007f7f901f3281c42e6aa74265ae634", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageArtifactsDeploy-f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17", "gid": 0, "group": "root", "md5sum": "4bc67e266b3a3913e42f3711f73ff22b", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2023, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920664.31-275717207439522/source", "state": "file", "uid": 0} >2018-06-25 05:57:44,863 p=25239 u=mistral | TASK [Check if deployed file exists for CephStorageArtifactsDeploy] ************ >2018-06-25 05:57:45,174 p=25239 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:57:45,193 p=25239 u=mistral | TASK [Check previous deployment rc for CephStorageArtifactsDeploy] ************* >2018-06-25 05:57:45,214 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:45,234 p=25239 u=mistral | TASK [Remove deployed file for CephStorageArtifactsDeploy when previous deployment failed] *** >2018-06-25 05:57:45,251 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:45,270 p=25239 u=mistral | TASK [Force remove deployed file for CephStorageArtifactsDeploy] *************** >2018-06-25 05:57:45,287 p=25239 u=mistral | skipping: [ceph-0] 
=> {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:45,305 p=25239 u=mistral | TASK [Run deployment CephStorageArtifactsDeploy] ******************************* >2018-06-25 05:57:46,116 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17.notify.json)", "delta": "0:00:00.448231", "end": "2018-06-25 05:57:46.223372", "rc": 0, "start": "2018-06-25 05:57:45.775141", "stderr": "[2018-06-25 05:57:45,797] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17.json\n[2018-06-25 05:57:45,824] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:57:45,824] (heat-config) [DEBUG] [2018-06-25 05:57:45,816] (heat-config) [INFO] artifact_urls=\n[2018-06-25 05:57:45,816] (heat-config) [INFO] deploy_server_id=48f90ddc-458e-4a9f-a1b0-0040aafc9548\n[2018-06-25 05:57:45,816] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-25 05:57:45,816] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-xkma6mui6qhp-CephStorageArtifactsDeploy-yogdtbighotw-0-zfnqpj5xlq3k/647a7dcd-cb0b-417b-b20d-763022d8f5f1\n[2018-06-25 05:57:45,816] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-25 05:57:45,816] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-25 05:57:45,816] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17\n[2018-06-25 05:57:45,821] (heat-config) [INFO] No artifact_urls was set. 
Skipping...\n\n[2018-06-25 05:57:45,821] (heat-config) [DEBUG] \n[2018-06-25 05:57:45,821] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17\n\n[2018-06-25 05:57:45,824] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-25 05:57:45,824] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17.json < /var/lib/heat-config/deployed/f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17.notify.json\n[2018-06-25 05:57:46,217] (heat-config) [INFO] \n[2018-06-25 05:57:46,217] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:57:45,797] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17.json", "[2018-06-25 05:57:45,824] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:57:45,824] (heat-config) [DEBUG] [2018-06-25 05:57:45,816] (heat-config) [INFO] artifact_urls=", "[2018-06-25 05:57:45,816] (heat-config) [INFO] deploy_server_id=48f90ddc-458e-4a9f-a1b0-0040aafc9548", "[2018-06-25 05:57:45,816] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-25 05:57:45,816] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-xkma6mui6qhp-CephStorageArtifactsDeploy-yogdtbighotw-0-zfnqpj5xlq3k/647a7dcd-cb0b-417b-b20d-763022d8f5f1", "[2018-06-25 05:57:45,816] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-25 05:57:45,816] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-25 05:57:45,816] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17", "[2018-06-25 05:57:45,821] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", "", "[2018-06-25 05:57:45,821] (heat-config) [DEBUG] ", "[2018-06-25 05:57:45,821] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17", "", "[2018-06-25 05:57:45,824] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-25 05:57:45,824] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17.json < /var/lib/heat-config/deployed/f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17.notify.json", "[2018-06-25 05:57:46,217] (heat-config) [INFO] ", "[2018-06-25 05:57:46,217] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:57:46,136 p=25239 u=mistral | TASK [Output for CephStorageArtifactsDeploy] *********************************** >2018-06-25 05:57:46,242 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:57:45,797] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17.json", > "[2018-06-25 05:57:45,824] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:57:45,824] (heat-config) [DEBUG] [2018-06-25 05:57:45,816] (heat-config) [INFO] artifact_urls=", > "[2018-06-25 05:57:45,816] (heat-config) [INFO] deploy_server_id=48f90ddc-458e-4a9f-a1b0-0040aafc9548", > "[2018-06-25 05:57:45,816] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-25 05:57:45,816] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-xkma6mui6qhp-CephStorageArtifactsDeploy-yogdtbighotw-0-zfnqpj5xlq3k/647a7dcd-cb0b-417b-b20d-763022d8f5f1", > "[2018-06-25 05:57:45,816] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-25 05:57:45,816] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-25 05:57:45,816] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17", > "[2018-06-25 05:57:45,821] (heat-config) [INFO] No artifact_urls was set. Skipping...", > "", > "[2018-06-25 05:57:45,821] (heat-config) [DEBUG] ", > "[2018-06-25 05:57:45,821] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17", > "", > "[2018-06-25 05:57:45,824] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-25 05:57:45,824] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17.json < /var/lib/heat-config/deployed/f3cc75eb-5dfc-4df8-a34c-a2b3c018fd17.notify.json", > "[2018-06-25 05:57:46,217] (heat-config) [INFO] ", > "[2018-06-25 05:57:46,217] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:57:46,262 p=25239 u=mistral | TASK [Check-mode for Run deployment CephStorageArtifactsDeploy] **************** >2018-06-25 05:57:46,277 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:46,294 p=25239 u=mistral | TASK [Lookup deployment 
UUID] ************************************************** >2018-06-25 05:57:46,409 p=25239 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "a88de79e-cdfa-4333-80c3-62877e349ede"}, "changed": false} >2018-06-25 05:57:46,471 p=25239 u=mistral | TASK [Render deployment file for CephStorageHostPrepDeployment] **************** >2018-06-25 05:57:47,062 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "7241345bb7fb452042a71f64ff89587b445207b1", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageHostPrepDeployment-a88de79e-cdfa-4333-80c3-62877e349ede", "gid": 0, "group": "root", "md5sum": "a8c11bcd9f3038f193a457d4a299d1ce", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19872, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920666.54-206058366558078/source", "state": "file", "uid": 0} >2018-06-25 05:57:47,080 p=25239 u=mistral | TASK [Check if deployed file exists for CephStorageHostPrepDeployment] ********* >2018-06-25 05:57:47,392 p=25239 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 05:57:47,413 p=25239 u=mistral | TASK [Check previous deployment rc for CephStorageHostPrepDeployment] ********** >2018-06-25 05:57:47,431 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:47,450 p=25239 u=mistral | TASK [Remove deployed file for CephStorageHostPrepDeployment when previous deployment failed] *** >2018-06-25 05:57:47,467 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:47,486 p=25239 u=mistral | TASK [Force remove deployed file for CephStorageHostPrepDeployment] ************ >2018-06-25 05:57:47,504 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:47,524 p=25239 u=mistral | TASK [Run deployment 
CephStorageHostPrepDeployment] **************************** >2018-06-25 05:57:52,670 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/a88de79e-cdfa-4333-80c3-62877e349ede.notify.json)", "delta": "0:00:04.819680", "end": "2018-06-25 05:57:52.772466", "rc": 0, "start": "2018-06-25 05:57:47.952786", "stderr": "[2018-06-25 05:57:47,975] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/a88de79e-cdfa-4333-80c3-62877e349ede.json\n[2018-06-25 05:57:52,360] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-25 05:57:52,360] (heat-config) [DEBUG] [2018-06-25 05:57:47,999] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/a88de79e-cdfa-4333-80c3-62877e349ede_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/a88de79e-cdfa-4333-80c3-62877e349ede_variables.json\n[2018-06-25 05:57:52,356] (heat-config) [INFO] Return code 0\n[2018-06-25 05:57:52,356] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [Create /var/lib/docker-puppet] 
*******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \n\n\n[2018-06-25 05:57:52,356] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/a88de79e-cdfa-4333-80c3-62877e349ede_playbook.yaml\n\n[2018-06-25 05:57:52,360] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-06-25 05:57:52,360] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a88de79e-cdfa-4333-80c3-62877e349ede.json < /var/lib/heat-config/deployed/a88de79e-cdfa-4333-80c3-62877e349ede.notify.json\n[2018-06-25 05:57:52,767] (heat-config) [INFO] \n[2018-06-25 05:57:52,767] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-25 05:57:47,975] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/a88de79e-cdfa-4333-80c3-62877e349ede.json", "[2018-06-25 05:57:52,360] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-25 05:57:52,360] (heat-config) [DEBUG] [2018-06-25 05:57:47,999] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/a88de79e-cdfa-4333-80c3-62877e349ede_playbook.yaml --extra-vars 
@/var/lib/heat-config/heat-config-ansible/a88de79e-cdfa-4333-80c3-62877e349ede_variables.json", "[2018-06-25 05:57:52,356] (heat-config) [INFO] Return code 0", "[2018-06-25 05:57:52,356] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=3 changed=2 unreachable=0 failed=0 ", "", "", "[2018-06-25 05:57:52,356] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/a88de79e-cdfa-4333-80c3-62877e349ede_playbook.yaml", "", "[2018-06-25 05:57:52,360] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-06-25 05:57:52,360] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a88de79e-cdfa-4333-80c3-62877e349ede.json < /var/lib/heat-config/deployed/a88de79e-cdfa-4333-80c3-62877e349ede.notify.json", "[2018-06-25 05:57:52,767] (heat-config) [INFO] ", "[2018-06-25 05:57:52,767] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-25 05:57:52,689 p=25239 u=mistral | TASK [Output for CephStorageHostPrepDeployment] ******************************** >2018-06-25 05:57:52,736 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-25 05:57:47,975] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/a88de79e-cdfa-4333-80c3-62877e349ede.json", > "[2018-06-25 05:57:52,360] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering 
Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-25 05:57:52,360] (heat-config) [DEBUG] [2018-06-25 05:57:47,999] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/a88de79e-cdfa-4333-80c3-62877e349ede_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/a88de79e-cdfa-4333-80c3-62877e349ede_variables.json", > "[2018-06-25 05:57:52,356] (heat-config) [INFO] Return code 0", > "[2018-06-25 05:57:52,356] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=3 changed=2 unreachable=0 failed=0 ", > "", > "", > "[2018-06-25 05:57:52,356] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/a88de79e-cdfa-4333-80c3-62877e349ede_playbook.yaml", > "", > "[2018-06-25 05:57:52,360] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-06-25 05:57:52,360] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a88de79e-cdfa-4333-80c3-62877e349ede.json < 
/var/lib/heat-config/deployed/a88de79e-cdfa-4333-80c3-62877e349ede.notify.json", > "[2018-06-25 05:57:52,767] (heat-config) [INFO] ", > "[2018-06-25 05:57:52,767] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-25 05:57:52,754 p=25239 u=mistral | TASK [Check-mode for Run deployment CephStorageHostPrepDeployment] ************* >2018-06-25 05:57:52,768 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:52,774 p=25239 u=mistral | PLAY [Host prep steps] ********************************************************* >2018-06-25 05:57:52,808 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:57:52,856 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:52,857 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:52,872 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:52,881 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:53,184 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/aodh) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/aodh", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:57:53,498 p=25239 u=mistral | 
ok: [controller-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/aodh-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/aodh-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:57:53,520 p=25239 u=mistral | TASK [aodh logs readme] ******************************************************** >2018-06-25 05:57:53,572 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:53,585 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:54,159 p=25239 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "b6cf6dbe054f430c33d39c1a1a88593536d6e659", "msg": "Destination directory /var/log/aodh does not exist"} >2018-06-25 05:57:54,159 p=25239 u=mistral | ...ignoring >2018-06-25 05:57:54,181 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:57:54,230 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:54,248 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:54,551 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:57:54,572 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:57:54,626 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:54,641 p=25239 u=mistral | skipping: [ceph-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:54,919 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:57:54,942 p=25239 u=mistral | TASK [ceilometer logs readme] ************************************************** >2018-06-25 05:57:54,991 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:55,005 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:55,621 p=25239 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} >2018-06-25 05:57:55,621 p=25239 u=mistral | ...ignoring >2018-06-25 05:57:55,641 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:57:55,697 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:55,698 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:55,711 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:55,720 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": 
"Conditional result was False"} >2018-06-25 05:57:56,038 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:57:56,385 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/cinder-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/cinder-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:57:56,409 p=25239 u=mistral | TASK [cinder logs readme] ****************************************************** >2018-06-25 05:57:56,468 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:56,485 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:57,109 p=25239 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292", "msg": "Destination directory /var/log/cinder does not exist"} >2018-06-25 05:57:57,109 p=25239 u=mistral | ...ignoring >2018-06-25 05:57:57,131 p=25239 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-25 05:57:57,184 p=25239 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:57,185 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:57,198 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:57,203 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:57,527 p=25239 u=mistral | ok: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:57:57,871 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:57:57,898 p=25239 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-06-25 05:57:57,954 p=25239 u=mistral | skipping: [compute-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:57,972 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:58,281 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:57:58,302 p=25239 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-25 05:57:58,352 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:58,370 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:58,657 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:57:58,680 p=25239 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-25 05:57:58,731 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:58,732 p=25239 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:58,749 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": 
"/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:58,754 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:59,033 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:57:59,367 p=25239 u=mistral | ok: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:57:59,389 p=25239 u=mistral | TASK [cinder_enable_iscsi_backend fact] **************************************** >2018-06-25 05:57:59,439 p=25239 u=mistral | ok: [controller-0] => {"ansible_facts": {"cinder_enable_iscsi_backend": false}, "changed": false} >2018-06-25 05:57:59,440 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:59,452 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:59,473 p=25239 u=mistral | TASK [cinder create LVM volume group dd] *************************************** >2018-06-25 05:57:59,498 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:59,520 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:59,530 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-06-25 05:57:59,550 p=25239 u=mistral | TASK [cinder create LVM volume group] ****************************************** >2018-06-25 05:57:59,576 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:59,597 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:59,613 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:57:59,632 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:57:59,681 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >2018-06-25 05:57:59,697 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:00,046 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/glance) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/glance", "mode": "0755", "owner": "root", "path": "/var/log/containers/glance", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:00,070 p=25239 u=mistral | TASK [glance logs readme] ****************************************************** >2018-06-25 05:58:00,124 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:00,136 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:00,747 p=25239 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "e368ae3272baeb19e1113009ea5dae00e797c919", "msg": "Destination directory /var/log/glance does not exist"} >2018-06-25 05:58:00,748 p=25239 u=mistral | ...ignoring >2018-06-25 05:58:00,771 p=25239 u=mistral | TASK [set_fact] **************************************************************** >2018-06-25 05:58:00,835 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:00,860 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:00,871 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:00,892 p=25239 u=mistral | TASK [file] ******************************************************************** >2018-06-25 05:58:00,918 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:00,940 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:00,954 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:00,974 p=25239 u=mistral | TASK [stat] ******************************************************************** >2018-06-25 05:58:01,000 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,022 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,035 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,057 p=25239 u=mistral | TASK [copy] ******************************************************************** >2018-06-25 05:58:01,086 p=25239 u=mistral | skipping: [controller-0] => 
(item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,115 p=25239 u=mistral | skipping: [compute-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,128 p=25239 u=mistral | skipping: [ceph-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,150 p=25239 u=mistral | TASK [mount] ******************************************************************* >2018-06-25 05:58:01,181 p=25239 u=mistral | skipping: [controller-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,208 p=25239 u=mistral | skipping: [compute-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,220 p=25239 u=mistral | skipping: [ceph-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,243 p=25239 u=mistral | TASK [Mount Node Staging Location] ********************************************* >2018-06-25 05:58:01,271 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-06-25 05:58:01,295 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,307 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,329 p=25239 u=mistral | TASK [Mount NFS on host] ******************************************************* >2018-06-25 05:58:01,356 p=25239 u=mistral | skipping: [controller-0] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) => {"changed": false, "item": {"NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0", "NFS_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,383 p=25239 u=mistral | skipping: [compute-0] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) => {"changed": false, "item": {"NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0", "NFS_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,402 p=25239 u=mistral | skipping: [ceph-0] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) => {"changed": false, "item": {"NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0", "NFS_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,427 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:58:01,483 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": "/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,484 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": 
"Conditional result was False"} >2018-06-25 05:58:01,499 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": "/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,504 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:01,800 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/gnocchi", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:02,119 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/gnocchi-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/gnocchi-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:02,143 p=25239 u=mistral | TASK [gnocchi logs readme] ***************************************************** >2018-06-25 05:58:02,196 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:02,208 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:02,794 p=25239 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "2f6114e0f135d7222e70a07579ab0b2b6f967ff8", "msg": "Destination directory /var/log/gnocchi does not exist"} >2018-06-25 05:58:02,794 p=25239 u=mistral | ...ignoring >2018-06-25 05:58:02,817 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:58:02,875 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:02,891 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,213 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:03,237 p=25239 u=mistral | TASK [get parameters] ********************************************************** >2018-06-25 05:58:03,291 p=25239 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 05:58:03,292 p=25239 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 05:58:03,303 p=25239 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 05:58:03,324 p=25239 u=mistral | TASK [get DeployedSSLCertificatePath attributes] ******************************* >2018-06-25 05:58:03,349 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,375 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 
05:58:03,387 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,408 p=25239 u=mistral | TASK [Assign bootstrap node] *************************************************** >2018-06-25 05:58:03,435 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,457 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,468 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,488 p=25239 u=mistral | TASK [set is_bootstrap_node fact] ********************************************** >2018-06-25 05:58:03,514 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,536 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,547 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,568 p=25239 u=mistral | TASK [get haproxy status] ****************************************************** >2018-06-25 05:58:03,594 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,616 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,633 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,656 p=25239 u=mistral | TASK [get pacemaker status] **************************************************** >2018-06-25 05:58:03,684 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 
05:58:03,706 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,719 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,739 p=25239 u=mistral | TASK [get docker status] ******************************************************* >2018-06-25 05:58:03,766 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,789 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,800 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,820 p=25239 u=mistral | TASK [get container_id] ******************************************************** >2018-06-25 05:58:03,848 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,871 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,883 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,903 p=25239 u=mistral | TASK [get pcs resource name for haproxy container] ***************************** >2018-06-25 05:58:03,932 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,957 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,969 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:03,989 p=25239 u=mistral | TASK [remove DeployedSSLCertificatePath if is dir] ***************************** >2018-06-25 
05:58:04,015 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,037 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,051 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,071 p=25239 u=mistral | TASK [push certificate content] ************************************************ >2018-06-25 05:58:04,097 p=25239 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 05:58:04,121 p=25239 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 05:58:04,132 p=25239 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 05:58:04,152 p=25239 u=mistral | TASK [set certificate ownership] *********************************************** >2018-06-25 05:58:04,178 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,204 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,217 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,238 p=25239 u=mistral | TASK [reload haproxy if enabled] *********************************************** >2018-06-25 05:58:04,265 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,288 p=25239 u=mistral | skipping: [compute-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,301 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,324 p=25239 u=mistral | TASK [restart pacemaker resource for haproxy] ********************************** >2018-06-25 05:58:04,353 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,376 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,388 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,408 p=25239 u=mistral | TASK [set kolla_dir fact] ****************************************************** >2018-06-25 05:58:04,435 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,457 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,472 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,495 p=25239 u=mistral | TASK [set certificate group on host via container] ***************************** >2018-06-25 05:58:04,521 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,542 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,553 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,574 p=25239 u=mistral | TASK [copy certificate from kolla directory to final location] ***************** >2018-06-25 05:58:04,598 p=25239 u=mistral | skipping: [controller-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,622 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,634 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,654 p=25239 u=mistral | TASK [send restart order to haproxy container] ********************************* >2018-06-25 05:58:04,680 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,701 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,712 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,733 p=25239 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-25 05:58:04,784 p=25239 u=mistral | skipping: [compute-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:04,801 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:05,093 p=25239 u=mistral | ok: [controller-0] => (item=/var/lib/haproxy) => {"changed": false, "gid": 188, "group": "haproxy", "item": "/var/lib/haproxy", "mode": "0755", "owner": "haproxy", "path": "/var/lib/haproxy", "secontext": "system_u:object_r:haproxy_var_lib_t:s0", "size": 6, "state": "directory", "uid": 188} >2018-06-25 05:58:05,116 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:58:05,166 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": 
"Conditional result was False"} >2018-06-25 05:58:05,167 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:05,181 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:05,188 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:05,473 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/heat) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:05,788 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:05,811 p=25239 u=mistral | TASK [heat logs readme] ******************************************************** >2018-06-25 05:58:05,863 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:05,874 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:06,483 p=25239 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "d30ca3bda176434d31659e7379616dd162ddb246", "msg": "Destination directory /var/log/heat does not exist"} >2018-06-25 05:58:06,483 p=25239 u=mistral | ...ignoring >2018-06-25 05:58:06,506 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:58:06,560 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:06,560 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:06,574 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:06,579 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:06,908 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/heat) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:07,216 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api-cfn", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api-cfn", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:07,241 p=25239 u=mistral | TASK [create 
persistent logs directory] **************************************** >2018-06-25 05:58:07,331 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:07,345 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:07,626 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:07,648 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:58:07,698 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:07,699 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:07,713 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:07,717 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:08,006 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/horizon) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:08,329 p=25239 
u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:08,351 p=25239 u=mistral | TASK [horizon logs readme] ***************************************************** >2018-06-25 05:58:08,402 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:08,414 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:08,984 p=25239 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "ac324739761cb36b925d6e309482e26f7fe49b91", "msg": "Destination directory /var/log/horizon does not exist"} >2018-06-25 05:58:08,984 p=25239 u=mistral | ...ignoring >2018-06-25 05:58:09,006 p=25239 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-06-25 05:58:09,058 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:09,071 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:09,367 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"atime": 1529920573.2878382, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1529433183.0936344, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 5335882, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", 
"mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "18446744072695807771", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-06-25 05:58:09,389 p=25239 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-06-25 05:58:09,444 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:09,456 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:09,876 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "sysinit.target -.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Backlog": "128", "Before": "sockets.target iscsid.service shutdown.target", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": 
"/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "127793", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127793", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", 
"StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-06-25 05:58:09,899 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:58:09,950 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:09,951 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:09,966 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:09,971 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:10,272 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/keystone) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": 
"directory", "uid": 0} >2018-06-25 05:58:10,590 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:10,614 p=25239 u=mistral | TASK [keystone logs readme] **************************************************** >2018-06-25 05:58:10,670 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:10,685 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:11,243 p=25239 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "910be882addb6df99267e9bd303f6d9bf658562e", "msg": "Destination directory /var/log/keystone does not exist"} >2018-06-25 05:58:11,243 p=25239 u=mistral | ...ignoring >2018-06-25 05:58:11,266 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:58:11,320 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:11,334 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:11,611 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/memcached", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:11,635 p=25239 u=mistral | TASK [memcached logs readme] *************************************************** >2018-06-25 05:58:11,688 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result 
was False"} >2018-06-25 05:58:11,706 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:12,225 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "f72ee86fbe604c83734785fe970323e58e3fad9e", "dest": "/var/log/memcached-readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/memcached-readme.txt", "secontext": "system_u:object_r:var_log_t:s0", "size": 86, "state": "file", "uid": 0} >2018-06-25 05:58:12,249 p=25239 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-25 05:58:12,305 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:12,306 p=25239 u=mistral | skipping: [compute-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:12,322 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:12,326 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:12,635 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/mysql) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/mysql", "mode": "0755", "owner": "root", "path": "/var/log/containers/mysql", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:12,976 p=25239 u=mistral | ok: [controller-0] => (item=/var/lib/mysql) => {"changed": false, "gid": 27, "group": "mysql", "item": "/var/lib/mysql", "mode": "0755", "owner": "mysql", "path": 
"/var/lib/mysql", "secontext": "system_u:object_r:mysqld_db_t:s0", "size": 6, "state": "directory", "uid": 27} >2018-06-25 05:58:13,001 p=25239 u=mistral | TASK [mysql logs readme] ******************************************************* >2018-06-25 05:58:13,059 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:13,075 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:13,610 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "de8fb5fe96200ab286121f8a09419702bd693743", "dest": "/var/log/mariadb/readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/mariadb/readme.txt", "secontext": "system_u:object_r:mysqld_log_t:s0", "size": 78, "state": "file", "uid": 0} >2018-06-25 05:58:13,633 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:58:13,689 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:13,690 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:13,706 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:13,712 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:14,016 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/neutron) => 
{"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:14,351 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/neutron-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/neutron-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:14,375 p=25239 u=mistral | TASK [neutron logs readme] ***************************************************** >2018-06-25 05:58:14,429 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:14,443 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:15,015 p=25239 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >2018-06-25 05:58:15,015 p=25239 u=mistral | ...ignoring >2018-06-25 05:58:15,040 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:58:15,095 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:15,112 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:15,398 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:15,422 p=25239 u=mistral | TASK [create /var/lib/neutron] ************************************************* >2018-06-25 05:58:15,474 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:15,488 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:15,779 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/neutron", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:15,802 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:58:15,855 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/nova) => 
{"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:15,856 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:15,874 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:15,879 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:16,153 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/nova) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:16,494 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:16,518 p=25239 u=mistral | TASK [nova logs readme] ******************************************************** >2018-06-25 05:58:16,572 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:16,587 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:17,117 p=25239 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >2018-06-25 05:58:17,117 p=25239 u=mistral | ...ignoring >2018-06-25 05:58:17,140 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:58:17,193 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:17,207 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:17,478 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:17,501 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:58:17,555 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:17,556 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:17,572 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:17,576 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:17,845 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/nova) => {"changed": false, 
"gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:18,140 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-placement", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-placement", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:18,166 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:58:18,223 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/panko) => {"changed": false, "item": "/var/log/containers/panko", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:18,224 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:18,244 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/panko) => {"changed": false, "item": "/var/log/containers/panko", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:18,245 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:18,519 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/panko) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/panko", "mode": "0755", "owner": "root", "path": "/var/log/containers/panko", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:18,826 p=25239 
u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/panko-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/panko-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:18,851 p=25239 u=mistral | TASK [panko logs readme] ******************************************************* >2018-06-25 05:58:18,905 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:18,918 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:19,463 p=25239 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "903397bbd82e9b1f53087e3d7e8975d851857ce2", "msg": "Destination directory /var/log/panko does not exist"} >2018-06-25 05:58:19,463 p=25239 u=mistral | ...ignoring >2018-06-25 05:58:19,488 p=25239 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-25 05:58:19,542 p=25239 u=mistral | skipping: [compute-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:19,543 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:19,559 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:19,565 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:19,883 p=25239 u=mistral | ok: 
[controller-0] => (item=/var/lib/rabbitmq) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/lib/rabbitmq", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:20,204 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/log/containers/rabbitmq", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:20,229 p=25239 u=mistral | TASK [rabbitmq logs readme] **************************************************** >2018-06-25 05:58:20,285 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:20,298 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:20,885 p=25239 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "ee241f2199f264c9d0f384cf389fe255e8bf8a77", "msg": "Destination directory /var/log/rabbitmq does not exist"} >2018-06-25 05:58:20,886 p=25239 u=mistral | ...ignoring >2018-06-25 05:58:20,943 p=25239 u=mistral | TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] *** >2018-06-25 05:58:21,001 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:21,014 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:21,313 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "echo 'export ERL_EPMD_ADDRESS=127.0.0.1' > /etc/rabbitmq/rabbitmq-env.conf\n echo 'export ERL_EPMD_PORT=4370' >> /etc/rabbitmq/rabbitmq-env.conf\n for pid in $(pgrep epmd --ns 1 --nslist pid); do kill $pid; done", "delta": "0:00:00.031072", "end": "2018-06-25 05:58:21.409449", "rc": 0, "start": "2018-06-25 05:58:21.378377", "stderr": "/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory\n/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "stderr_lines": ["/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory"], "stdout": "", "stdout_lines": []} >2018-06-25 05:58:21,336 p=25239 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-25 05:58:21,389 p=25239 u=mistral | skipping: [compute-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:21,394 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:21,395 p=25239 u=mistral | skipping: [compute-0] => 
(item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:21,406 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:21,410 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:21,415 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:21,681 p=25239 u=mistral | ok: [controller-0] => (item=/var/lib/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/lib/redis", "mode": "0750", "owner": "redis", "path": "/var/lib/redis", "secontext": "system_u:object_r:redis_var_lib_t:s0", "size": 6, "state": "directory", "uid": 992} >2018-06-25 05:58:21,976 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers/redis) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/redis", "mode": "0755", "owner": "root", "path": "/var/log/containers/redis", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:22,275 p=25239 u=mistral | ok: [controller-0] => (item=/var/run/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/run/redis", "mode": "0755", "owner": "redis", "path": "/var/run/redis", "secontext": "system_u:object_r:redis_var_run_t:s0", "size": 40, "state": "directory", "uid": 992} >2018-06-25 05:58:22,300 p=25239 u=mistral | TASK [redis logs readme] ******************************************************* >2018-06-25 05:58:22,354 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:22,369 p=25239 u=mistral | 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:22,852 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "42d03af8abf93e87fdb3fc69702638fc81d943fb", "dest": "/var/log/redis/readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/redis/readme.txt", "secontext": "system_u:object_r:redis_log_t:s0", "size": 78, "state": "file", "uid": 0} >2018-06-25 05:58:22,876 p=25239 u=mistral | TASK [create /var/lib/sahara] ************************************************** >2018-06-25 05:58:22,930 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:22,944 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:23,237 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/sahara", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:23,264 p=25239 u=mistral | TASK [create persistent sahara logs directory] ********************************* >2018-06-25 05:58:23,318 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:23,332 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:23,605 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/sahara", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:23,628 p=25239 u=mistral | TASK [sahara logs readme] ****************************************************** >2018-06-25 05:58:23,683 p=25239 u=mistral | skipping: [compute-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:23,697 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:24,244 p=25239 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "b0212a1177fa4a88502d17a1cbc31198040cf047", "msg": "Destination directory /var/log/sahara does not exist"} >2018-06-25 05:58:24,244 p=25239 u=mistral | ...ignoring >2018-06-25 05:58:24,267 p=25239 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-25 05:58:24,323 p=25239 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:24,324 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:24,341 p=25239 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:24,345 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:24,612 p=25239 u=mistral | ok: [controller-0] => (item=/srv/node) => {"changed": false, "gid": 0, "group": "root", "item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 16, "state": "directory", "uid": 0} >2018-06-25 05:58:24,923 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/swift) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 24, "state": "directory", "uid": 0} >2018-06-25 05:58:24,947 p=25239 u=mistral | TASK [Create swift logging 
symlink] ******************************************** >2018-06-25 05:58:25,001 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:25,015 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:25,311 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "dest": "/var/log/containers/swift", "gid": 0, "group": "root", "mode": "0777", "owner": "root", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 14, "src": "/var/log/swift", "state": "link", "uid": 0} >2018-06-25 05:58:25,334 p=25239 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-25 05:58:25,393 p=25239 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:25,394 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:25,395 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:25,406 p=25239 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:25,412 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:25,416 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:25,681 p=25239 u=mistral | ok: [controller-0] => (item=/srv/node) => {"changed": false, "gid": 0, "group": "root", 
"item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 16, "state": "directory", "uid": 0} >2018-06-25 05:58:25,984 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/swift) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 24, "state": "directory", "uid": 0} >2018-06-25 05:58:26,281 p=25239 u=mistral | ok: [controller-0] => (item=/var/log/containers) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers", "mode": "0755", "owner": "root", "path": "/var/log/containers", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 261, "state": "directory", "uid": 0} >2018-06-25 05:58:26,306 p=25239 u=mistral | TASK [Set swift_use_local_disks fact] ****************************************** >2018-06-25 05:58:26,358 p=25239 u=mistral | ok: [controller-0] => {"ansible_facts": {"swift_use_local_disks": true}, "changed": false} >2018-06-25 05:58:26,359 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:26,370 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:26,392 p=25239 u=mistral | TASK [Create Swift d1 directory if needed] ************************************* >2018-06-25 05:58:26,444 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:26,462 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:26,731 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/srv/node/d1", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", 
"uid": 0} >2018-06-25 05:58:26,754 p=25239 u=mistral | TASK [swift logs readme] ******************************************************* >2018-06-25 05:58:26,809 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:26,823 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:27,301 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "42510a6de124722d6efbc2b1bb038bfe97e5b6d3", "dest": "/var/log/swift/readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/swift/readme.txt", "secontext": "system_u:object_r:var_log_t:s0", "size": 116, "state": "file", "uid": 0} >2018-06-25 05:58:27,324 p=25239 u=mistral | TASK [Format SwiftRawDisks] **************************************************** >2018-06-25 05:58:27,407 p=25239 u=mistral | TASK [Mount devices defined in SwiftRawDisks] ********************************** >2018-06-25 05:58:27,496 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:58:27,527 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:27,569 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:27,896 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:27,920 p=25239 u=mistral | TASK [ceilometer logs readme] ************************************************** >2018-06-25 05:58:27,950 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:27,988 p=25239 
u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:28,574 p=25239 u=mistral | fatal: [compute-0]: FAILED! => {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} >2018-06-25 05:58:28,574 p=25239 u=mistral | ...ignoring >2018-06-25 05:58:28,597 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:58:28,625 p=25239 u=mistral | skipping: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:28,664 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:28,967 p=25239 u=mistral | ok: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:28,989 p=25239 u=mistral | TASK [neutron logs readme] ***************************************************** >2018-06-25 05:58:29,016 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:29,053 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:29,649 p=25239 u=mistral | fatal: [compute-0]: FAILED! 
=> {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >2018-06-25 05:58:29,650 p=25239 u=mistral | ...ignoring >2018-06-25 05:58:29,674 p=25239 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-06-25 05:58:29,728 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:29,740 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:30,073 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"atime": 1529920628.064508, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1529433183.0936344, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 5335882, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "18446744072695807771", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-06-25 05:58:30,096 p=25239 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-06-25 05:58:30,124 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:30,165 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:30,495 p=25239 u=mistral | ok: [compute-0] => {"changed": 
false, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "sysinit.target -.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Backlog": "128", "Before": "sockets.target iscsid.service shutdown.target", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": "/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": 
"22967", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-06-25 05:58:30,518 p=25239 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-25 05:58:30,544 p=25239 
u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:30,585 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:30,900 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:30,924 p=25239 u=mistral | TASK [nova logs readme] ******************************************************** >2018-06-25 05:58:30,951 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:30,994 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:31,579 p=25239 u=mistral | fatal: [compute-0]: FAILED! => {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >2018-06-25 05:58:31,579 p=25239 u=mistral | ...ignoring >2018-06-25 05:58:31,603 p=25239 u=mistral | TASK [Mount Nova NFS Share] **************************************************** >2018-06-25 05:58:31,632 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:31,656 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:31,673 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:31,696 p=25239 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-25 05:58:31,724 p=25239 u=mistral | skipping: [controller-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": 
"Conditional result was False"} >2018-06-25 05:58:31,725 p=25239 u=mistral | skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:31,771 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:31,771 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:32,128 p=25239 u=mistral | ok: [compute-0] => (item=/var/lib/nova) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/nova", "mode": "0755", "owner": "root", "path": "/var/lib/nova", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:32,457 p=25239 u=mistral | ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 0} >2018-06-25 05:58:32,481 p=25239 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-06-25 05:58:32,513 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:32,553 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:32,916 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:32,943 p=25239 u=mistral | TASK [is Instance HA enabled] 
************************************************** >2018-06-25 05:58:33,012 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,053 p=25239 u=mistral | ok: [compute-0] => {"ansible_facts": {"instance_ha_enabled": false}, "changed": false} >2018-06-25 05:58:33,057 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,080 p=25239 u=mistral | TASK [prepare Instance HA script directory] ************************************ >2018-06-25 05:58:33,111 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,136 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,150 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,171 p=25239 u=mistral | TASK [install Instance HA script that runs nova-compute] *********************** >2018-06-25 05:58:33,198 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,223 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,234 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,256 p=25239 u=mistral | TASK [Get list of instance HA compute nodes] *********************************** >2018-06-25 05:58:33,282 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,308 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,321 p=25239 u=mistral | skipping: [ceph-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,341 p=25239 u=mistral | TASK [If instance HA is enabled on the node activate the evacuation completed check] *** >2018-06-25 05:58:33,367 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,390 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,402 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,422 p=25239 u=mistral | TASK [create libvirt persistent data directories] ****************************** >2018-06-25 05:58:33,448 p=25239 u=mistral | skipping: [controller-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,473 p=25239 u=mistral | skipping: [controller-0] => (item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,474 p=25239 u=mistral | skipping: [controller-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,475 p=25239 u=mistral | skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,475 p=25239 u=mistral | skipping: [controller-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,501 p=25239 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,502 p=25239 u=mistral | skipping: [ceph-0] => 
(item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,502 p=25239 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,506 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,511 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:33,816 p=25239 u=mistral | ok: [compute-0] => (item=/etc/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt", "mode": "0700", "owner": "root", "path": "/etc/libvirt", "secontext": "system_u:object_r:virt_etc_t:s0", "size": 215, "state": "directory", "uid": 0} >2018-06-25 05:58:34,147 p=25239 u=mistral | ok: [compute-0] => (item=/etc/libvirt/secrets) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/secrets", "mode": "0700", "owner": "root", "path": "/etc/libvirt/secrets", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:34,480 p=25239 u=mistral | ok: [compute-0] => (item=/etc/libvirt/qemu) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/qemu", "mode": "0700", "owner": "root", "path": "/etc/libvirt/qemu", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 22, "state": "directory", "uid": 0} >2018-06-25 05:58:34,809 p=25239 u=mistral | ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 
0} >2018-06-25 05:58:35,130 p=25239 u=mistral | ok: [compute-0] => (item=/var/log/containers/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/libvirt", "mode": "0755", "owner": "root", "path": "/var/log/containers/libvirt", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:35,153 p=25239 u=mistral | TASK [ensure qemu group is present on the host] ******************************** >2018-06-25 05:58:35,181 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:35,218 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:35,631 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "gid": 107, "name": "qemu", "state": "present", "system": false} >2018-06-25 05:58:35,652 p=25239 u=mistral | TASK [ensure qemu user is present on the host] ********************************* >2018-06-25 05:58:35,677 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:35,716 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:36,190 p=25239 u=mistral | ok: [compute-0] => {"append": false, "changed": false, "comment": "qemu user", "group": 107, "home": "/", "move_home": false, "name": "qemu", "shell": "/sbin/nologin", "state": "present", "uid": 107} >2018-06-25 05:58:36,211 p=25239 u=mistral | TASK [create directory for vhost-user sockets with qemu ownership] ************* >2018-06-25 05:58:36,240 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:36,276 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:36,588 p=25239 u=mistral | ok: 
[compute-0] => {"changed": false, "gid": 107, "group": "qemu", "mode": "0755", "owner": "qemu", "path": "/var/lib/vhost_sockets", "secontext": "system_u:object_r:virt_cache_t:s0", "size": 6, "state": "directory", "uid": 107} >2018-06-25 05:58:36,609 p=25239 u=mistral | TASK [check if libvirt is installed] ******************************************* >2018-06-25 05:58:36,636 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:36,671 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:37,027 p=25239 u=mistral | [WARNING]: Consider using the yum, dnf or zypper module rather than running >rpm. If you need to use command because yum, dnf or zypper is insufficient you >can add warn=False to this command task or set command_warnings=False in >ansible.cfg to get rid of this message. > >2018-06-25 05:58:37,027 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "cmd": ["/usr/bin/rpm", "-q", "libvirt-daemon"], "delta": "0:00:00.036019", "end": "2018-06-25 05:58:37.122497", "failed_when_result": false, "rc": 0, "start": "2018-06-25 05:58:37.086478", "stderr": "", "stderr_lines": [], "stdout": "libvirt-daemon-3.9.0-14.el7_5.5.x86_64", "stdout_lines": ["libvirt-daemon-3.9.0-14.el7_5.5.x86_64"]} >2018-06-25 05:58:37,050 p=25239 u=mistral | TASK [make sure libvirt services are disabled] ********************************* >2018-06-25 05:58:37,080 p=25239 u=mistral | skipping: [controller-0] => (item=libvirtd.service) => {"changed": false, "item": "libvirtd.service", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:37,082 p=25239 u=mistral | skipping: [controller-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:37,124 p=25239 u=mistral | skipping: [ceph-0] => (item=libvirtd.service) => {"changed": false, "item": 
"libvirtd.service", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:37,128 p=25239 u=mistral | skipping: [ceph-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:37,454 p=25239 u=mistral | ok: [compute-0] => (item=libvirtd.service) => {"changed": false, "enabled": false, "item": "libvirtd.service", "name": "libvirtd.service", "state": "stopped", "status": {"ActiveEnterTimestamp": "Mon 2018-06-25 05:51:28 EDT", "ActiveEnterTimestampMonotonic": "4801426", "ActiveExitTimestamp": "Mon 2018-06-25 05:57:12 EDT", "ActiveExitTimestampMonotonic": "348896762", "ActiveState": "inactive", "After": "virtlockd.socket system.slice iscsid.service apparmor.service remote-fs.target virtlogd.socket dbus.service systemd-journald.socket local-fs.target network.target virtlockd.service basic.target virtlogd.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Mon 2018-06-25 05:51:27 EDT", "AssertTimestampMonotonic": "4613397", "Before": "libvirt-guests.service shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Mon 2018-06-25 05:51:27 EDT", "ConditionTimestampMonotonic": "4613397", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Virtualization daemon", "DevicePolicy": "auto", "Documentation": "man:libvirtd(8) https://libvirt.org", "EnvironmentFile": "/etc/sysconfig/libvirtd (ignore_errors=yes)", "ExecMainCode": "1", "ExecMainExitTimestamp": "Mon 2018-06-25 05:57:12 EDT", 
"ExecMainExitTimestampMonotonic": "348913028", "ExecMainPID": "1160", "ExecMainStartTimestamp": "Mon 2018-06-25 05:51:27 EDT", "ExecMainStartTimestampMonotonic": "4614750", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/libvirtd ; argv[]=/usr/sbin/libvirtd $LIBVIRTD_ARGS ; ignore_errors=no ; start_time=[Mon 2018-06-25 05:51:27 EDT] ; stop_time=[Mon 2018-06-25 05:57:12 EDT] ; pid=1160 ; code=exited ; status=0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/libvirtd.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "libvirtd.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Mon 2018-06-25 05:57:12 EDT", "InactiveEnterTimestampMonotonic": "348913127", "InactiveExitTimestamp": "Mon 2018-06-25 05:51:27 EDT", "InactiveExitTimestampMonotonic": "4614786", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "8192", "LimitNPROC": "22967", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "libvirtd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", 
"PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "virtlockd.socket basic.target virtlogd.socket", "Restart": "on-failure", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "32768", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "disabled", "WantedBy": "libvirt-guests.service", "Wants": "system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-06-25 05:58:37,788 p=25239 u=mistral | ok: [compute-0] => (item=virtlogd.socket) => {"changed": false, "enabled": false, "item": "virtlogd.socket", "name": "virtlogd.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Mon 2018-06-25 05:51:26 EDT", "ActiveEnterTimestampMonotonic": "3138300", "ActiveExitTimestamp": "Mon 2018-06-25 05:57:12 EDT", "ActiveExitTimestampMonotonic": "349101525", "ActiveState": "inactive", "After": "sysinit.target -.slice -.mount", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Mon 
2018-06-25 05:51:26 EDT", "AssertTimestampMonotonic": "3137646", "Backlog": "128", "Before": "libvirtd.service sockets.target virtlogd.service shutdown.target", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Mon 2018-06-25 05:51:26 EDT", "ConditionTimestampMonotonic": "3137646", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Virtual machine log manager socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "FragmentPath": "/usr/lib/systemd/system/virtlogd.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "virtlogd.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Mon 2018-06-25 05:57:12 EDT", "InactiveEnterTimestampMonotonic": "349101525", "InactiveExitTimestamp": "Mon 2018-06-25 05:51:26 EDT", "InactiveExitTimestampMonotonic": "3138300", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22967", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", 
"LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "ListenStream": "/var/run/libvirt/virtlogd-sock", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "virtlogd.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "RequiredBy": "libvirtd.service virtlogd.service", "Requires": "sysinit.target -.mount", "RequiresMountsFor": "/var/run/libvirt/virtlogd-sock", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "virtlogd.service", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-06-25 05:58:37,812 p=25239 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-25 05:58:37,843 
p=25239 u=mistral | skipping: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:37,844 p=25239 u=mistral | skipping: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:37,870 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:37,871 p=25239 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:37,884 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:37,893 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:37,915 p=25239 u=mistral | TASK [cinder logs readme] ****************************************************** >2018-06-25 05:58:37,945 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:37,968 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:37,980 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,000 p=25239 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-06-25 05:58:38,024 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,047 
p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,057 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,077 p=25239 u=mistral | TASK [cinder_enable_iscsi_backend fact] **************************************** >2018-06-25 05:58:38,102 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,123 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,136 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,157 p=25239 u=mistral | TASK [cinder create LVM volume group dd] *************************************** >2018-06-25 05:58:38,188 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,211 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,224 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,246 p=25239 u=mistral | TASK [cinder create LVM volume group] ****************************************** >2018-06-25 05:58:38,274 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,300 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,313 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,335 p=25239 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-06-25 05:58:38,362 
p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,386 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,398 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,420 p=25239 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-06-25 05:58:38,448 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,475 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,488 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,510 p=25239 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-25 05:58:38,539 p=25239 u=mistral | skipping: [controller-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,541 p=25239 u=mistral | skipping: [controller-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,541 p=25239 u=mistral | skipping: [controller-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,568 p=25239 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,570 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-25 
05:58:38,570 p=25239 u=mistral | skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,581 p=25239 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,586 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,590 p=25239 u=mistral | skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,611 p=25239 u=mistral | TASK [Set swift_use_local_disks fact] ****************************************** >2018-06-25 05:58:38,638 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,663 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,674 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,694 p=25239 u=mistral | TASK [Create Swift d1 directory if needed] ************************************* >2018-06-25 05:58:38,721 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,743 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,761 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,785 p=25239 u=mistral | TASK [Create swift logging symlink] ******************************************** >2018-06-25 05:58:38,813 p=25239 u=mistral | skipping: 
[controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,840 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,852 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,874 p=25239 u=mistral | TASK [swift logs readme] ******************************************************* >2018-06-25 05:58:38,901 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,925 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,937 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:38,959 p=25239 u=mistral | TASK [Format SwiftRawDisks] **************************************************** >2018-06-25 05:58:39,040 p=25239 u=mistral | TASK [Mount devices defined in SwiftRawDisks] ********************************** >2018-06-25 05:58:39,104 p=25239 u=mistral | PLAY [External deployment step 1] ********************************************** >2018-06-25 05:58:39,123 p=25239 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-06-25 05:58:39,152 p=25239 u=mistral | ok: [undercloud] => {"ansible_facts": {"blacklisted_hostnames": []}, "changed": false} >2018-06-25 05:58:39,169 p=25239 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-06-25 05:58:39,375 p=25239 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/group_vars) => {"changed": true, "gid": 985, "group": "mistral", "item": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/group_vars", "mode": "0755", "owner": "mistral", "path": 
"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/group_vars", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988} >2018-06-25 05:58:39,536 p=25239 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/host_vars) => {"changed": true, "gid": 985, "group": "mistral", "item": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/host_vars", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/host_vars", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988} >2018-06-25 05:58:39,701 p=25239 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir) => {"changed": true, "gid": 985, "group": "mistral", "item": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988} >2018-06-25 05:58:39,721 p=25239 u=mistral | TASK [generate inventory] ****************************************************** >2018-06-25 05:58:40,311 p=25239 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "9f53f97f5fe6641c181bd2b2662ffd2a1df3ee68", "dest": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/inventory.yml", "gid": 985, "group": "mistral", "md5sum": "cc9c2345087728f861791109c39b375c", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 527, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529920720.0-140421959867079/source", "state": "file", "uid": 988} >2018-06-25 05:58:40,328 p=25239 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-06-25 05:58:40,364 
p=25239 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_all": {"ceph_conf_overrides": {"global": {"osd_pool_default_pg_num": 32, "osd_pool_default_pgp_num": 32, "osd_pool_default_size": 1, "rgw_keystone_accepted_roles": "Member, admin", "rgw_keystone_admin_domain": "default", "rgw_keystone_admin_password": "y3VecJyRrRbxrdVnJj3RXwPZX", "rgw_keystone_admin_project": "service", "rgw_keystone_admin_user": "swift", "rgw_keystone_api_version": 3, "rgw_keystone_implicit_tenants": "true", "rgw_keystone_revocation_interval": "0", "rgw_keystone_url": "http://172.17.1.15:5000", "rgw_s3_auth_use_keystone": "true"}}, "ceph_docker_image": "rhceph", "ceph_docker_image_tag": "3-6", "ceph_docker_registry": "192.168.24.1:8787", "ceph_origin": "distro", "ceph_stable": true, "cluster": "ceph", "cluster_network": "172.17.4.0/24", "containerized_deployment": true, "docker": true, "fsid": "78ace352-763a-11e8-9c1d-525400166144", "generate_fsid": false, "ip_version": "ipv4", "keys": [{"key": "AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow r", "name": "client.openstack", "osd_cap": "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics"}, {"key": "AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==", "mds_cap": "allow *", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow r, allow command \\\"auth del\\\", allow command \\\"auth caps\\\", allow command \\\"auth get\\\", allow command \\\"auth get-or-create\\\"", "name": "client.manila", "osd_cap": "allow rw"}, {"key": "AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow rw", "name": "client.radosgw", "osd_cap": "allow rwx"}], "monitor_address_block": "172.17.3.0/24", "ntp_service_enabled": false, "openstack_config": true, "openstack_keys": [{"key": "AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==", "mgr_cap": "allow *", 
"mode": "0600", "mon_cap": "allow r", "name": "client.openstack", "osd_cap": "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics"}, {"key": "AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==", "mds_cap": "allow *", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow r, allow command \\\"auth del\\\", allow command \\\"auth caps\\\", allow command \\\"auth get\\\", allow command \\\"auth get-or-create\\\"", "name": "client.manila", "osd_cap": "allow rw"}, {"key": "AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow rw", "name": "client.radosgw", "osd_cap": "allow rwx"}], "openstack_pools": [{"application": "rbd", "name": "images", "pg_num": 32, "rule_name": ""}, {"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": ""}, {"application": "rbd", "name": "backups", "pg_num": 32, "rule_name": ""}, {"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": ""}, {"application": "rbd", "name": "volumes", "pg_num": 32, "rule_name": ""}], "pools": [], "public_network": "172.17.3.0/24", "user_config": true}}, "changed": false} >2018-06-25 05:58:40,384 p=25239 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-06-25 05:58:40,733 p=25239 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "d2f81f55ed434d14c65add101a560597d9f2e352", "dest": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/group_vars/all.yml", "gid": 985, "group": "mistral", "md5sum": "8f2459663c2ce871f50b5a734e6633eb", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 3030, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529920720.42-155129047681275/source", "state": "file", "uid": 988} >2018-06-25 05:58:40,750 p=25239 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* 
>2018-06-25 05:58:40,779 p=25239 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_extra_vars": {"fetch_directory": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir", "ireallymeanit": "yes"}}, "changed": false} >2018-06-25 05:58:40,796 p=25239 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-06-25 05:58:41,115 p=25239 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "27a7b301f57c037be576b40a8ea3e2361f006f51", "dest": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/extra_vars.yml", "gid": 985, "group": "mistral", "md5sum": "4517f7614662ee3c8ad601b8822d412d", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 115, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529920720.82-252106916688419/source", "state": "file", "uid": 988} >2018-06-25 05:58:41,132 p=25239 u=mistral | TASK [generate collect nodes uuid playbook] ************************************ >2018-06-25 05:58:41,500 p=25239 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "0ed9243967d775f1d706f954c81c53dbea91f151", "dest": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/nodes_uuid_playbook.yml", "gid": 985, "group": "mistral", "md5sum": "afa7e006582a1713f57c3de7724c9f39", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 157, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529920721.21-63943392154247/source", "state": "file", "uid": 988} >2018-06-25 05:58:41,547 p=25239 u=mistral | TASK [set ceph-ansible verbosity] ********************************************** >2018-06-25 05:58:41,565 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:41,582 p=25239 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-06-25 05:58:41,600 p=25239 u=mistral 
| skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:41,616 p=25239 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-06-25 05:58:41,637 p=25239 u=mistral | skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"} >2018-06-25 05:58:41,653 p=25239 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-06-25 05:58:41,680 p=25239 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mgrs": {"ceph_mgr_docker_extra_env": "-e MGR_DASHBOARD=0"}}, "changed": false} >2018-06-25 05:58:41,696 p=25239 u=mistral | TASK [generate ceph-ansible group vars mgrs] *********************************** >2018-06-25 05:58:42,021 p=25239 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "06d130f3471f2ac09bb0161450878cf64bafd8af", "dest": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/group_vars/mgrs.yml", "gid": 985, "group": "mistral", "md5sum": "0d3c03a4186ad82120a728e0470a49d9", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 46, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529920721.72-174304117546169/source", "state": "file", "uid": 988} >2018-06-25 05:58:42,039 p=25239 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-06-25 05:58:42,070 p=25239 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mons": {"admin_secret": "AQClJS1bAAAAABAATmDa/Tgwe4c7btmf4KFkJA==", "monitor_secret": "AQClJS1bAAAAABAAV94L3V8wTXJeGO2EyVOK9Q=="}}, "changed": false} >2018-06-25 05:58:42,086 p=25239 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-06-25 05:58:42,413 p=25239 u=mistral | 
changed: [undercloud] => {"changed": true, "checksum": "03d76f5a9f0760d008e38cc79a5a787ae6f482ed", "dest": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/group_vars/mons.yml", "gid": 985, "group": "mistral", "md5sum": "6004ff3edcaab8f7e9252b388eb1850b", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 112, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529920722.12-278564496247398/source", "state": "file", "uid": 988} >2018-06-25 05:58:42,430 p=25239 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-06-25 05:58:42,458 p=25239 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_clients": {}}, "changed": false} >2018-06-25 05:58:42,474 p=25239 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-06-25 05:58:42,794 p=25239 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/group_vars/clients.yml", "gid": 985, "group": "mistral", "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529920722.5-132606704136791/source", "state": "file", "uid": 988} >2018-06-25 05:58:42,811 p=25239 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-06-25 05:58:42,840 p=25239 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_osds": {"devices": ["/dev/vdb"], "journal_size": 512, "osd_objectstore": "filestore", "osd_scenario": "collocated"}}, "changed": false} >2018-06-25 05:58:42,857 p=25239 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-06-25 05:58:43,187 p=25239 u=mistral | changed: [undercloud] => {"changed": true, 
"checksum": "454c7fd1ab87fd8f8ec07c9874039814cbe681cf", "dest": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/group_vars/osds.yml", "gid": 985, "group": "mistral", "md5sum": "e03a30f138554d36c1743c14fd3d9357", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 90, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529920722.89-152617371116870/source", "state": "file", "uid": 988} >2018-06-25 05:58:43,192 p=25239 u=mistral | PLAY [Overcloud deploy step tasks for 1] *************************************** >2018-06-25 05:58:43,217 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 05:58:43,269 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:43,280 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:43,345 p=25239 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-06-25 05:58:43,820 p=25239 u=mistral | changed: [controller-0] => {"changed": true} >2018-06-25 05:58:43,845 p=25239 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-06-25 05:58:44,534 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-63.git94f4240.el7.x86_64 providing docker is already installed"]} >2018-06-25 05:58:44,556 p=25239 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-06-25 05:58:44,912 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:44,935 p=25239 u=mistral | TASK 
[container-registry : unset mountflags] *********************************** >2018-06-25 05:58:45,387 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-06-25 05:58:45,409 p=25239 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-06-25 05:58:45,873 p=25239 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-25 05:58:45,895 p=25239 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >2018-06-25 05:58:46,283 p=25239 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-06-25 05:58:46,304 p=25239 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-06-25 05:58:46,674 p=25239 u=mistral | changed: [controller-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:46,699 p=25239 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-06-25 05:58:47,375 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920726.74-20428087173606/source", 
"state": "file", "uid": 0} >2018-06-25 05:58:47,398 p=25239 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-06-25 05:58:47,770 p=25239 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-25 05:58:47,793 p=25239 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-06-25 05:58:48,137 p=25239 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-25 05:58:48,160 p=25239 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-06-25 05:58:48,517 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} >2018-06-25 05:58:48,542 p=25239 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-06-25 05:58:48,565 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:48,587 p=25239 u=mistral | TASK [container-registry : force systemd to reread configs] ******************** >2018-06-25 05:58:49,013 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "name": null, "status": {}} >2018-06-25 05:58:49,037 p=25239 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-06-25 05:58:50,798 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "docker-storage-setup.service network.target system.slice systemd-journald.socket rhel-push-plugin.socket basic.target registries.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", 
"Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", 
"GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127793", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "basic.target rhel-push-plugin.socket registries.service docker-cleanup.timer", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", 
"StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-06-25 05:58:50,822 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 05:58:50,852 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:50,889 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:50,931 p=25239 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-06-25 05:58:51,304 p=25239 u=mistral | changed: [compute-0] => {"changed": true} >2018-06-25 05:58:51,324 p=25239 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-06-25 05:58:52,004 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-63.git94f4240.el7.x86_64 providing docker is already installed"]} >2018-06-25 05:58:52,025 p=25239 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-06-25 05:58:52,386 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, 
"state": "directory", "uid": 0} >2018-06-25 05:58:52,405 p=25239 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-06-25 05:58:52,770 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-06-25 05:58:52,788 p=25239 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-06-25 05:58:53,163 p=25239 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-25 05:58:53,181 p=25239 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >2018-06-25 05:58:53,553 p=25239 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-06-25 05:58:53,570 p=25239 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-06-25 05:58:53,949 p=25239 u=mistral | changed: [compute-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:53,974 p=25239 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-06-25 05:58:54,654 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920734.02-191255866566662/source", "state": "file", "uid": 0} >2018-06-25 05:58:54,671 p=25239 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-06-25 05:58:55,089 p=25239 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-25 05:58:55,107 p=25239 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-06-25 05:58:55,527 p=25239 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-25 05:58:55,543 p=25239 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-06-25 05:58:55,960 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} >2018-06-25 05:58:55,979 p=25239 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-06-25 05:58:56,003 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:56,021 p=25239 u=mistral | TASK [container-registry : force systemd to reread configs] ******************** >2018-06-25 05:58:56,489 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "name": null, "status": {}} >2018-06-25 05:58:56,508 p=25239 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-06-25 05:58:58,250 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "registries.service systemd-journald.socket basic.target rhel-push-plugin.socket network.target docker-storage-setup.service system.slice", "AllowIsolate": "no", 
"AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", 
"FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "basic.target registries.service rhel-push-plugin.socket docker-cleanup.timer", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", 
"StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-06-25 05:58:58,275 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 05:58:58,303 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:58,327 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:58,338 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:58,359 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 05:58:58,385 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:58,407 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:58,419 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:58,441 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 05:58:58,466 p=25239 u=mistral | skipping: 
[controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:58,488 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:58:58,541 p=25239 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-06-25 05:58:58,866 p=25239 u=mistral | changed: [ceph-0] => {"changed": true} >2018-06-25 05:58:58,884 p=25239 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-06-25 05:58:59,467 p=25239 u=mistral | ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-63.git94f4240.el7.x86_64 providing docker is already installed"]} >2018-06-25 05:58:59,488 p=25239 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-06-25 05:58:59,813 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:58:59,833 p=25239 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-06-25 05:59:00,155 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-06-25 05:59:00,173 p=25239 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-06-25 05:59:00,495 p=25239 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-25 05:59:00,513 p=25239 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY 
in /etc/sysconfig/docker] ***
>2018-06-25 05:59:00,839 p=25239 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line added"}
>2018-06-25 05:59:00,855 p=25239 u=mistral | TASK [container-registry : Create additional socket directories] ***************
>2018-06-25 05:59:01,175 p=25239 u=mistral | changed: [ceph-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-06-25 05:59:01,203 p=25239 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] *********************
>2018-06-25 05:59:01,764 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920741.25-100478523590662/source", "state": "file", "uid": 0}
>2018-06-25 05:59:01,782 p=25239 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] ***
>2018-06-25 05:59:02,109 p=25239 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"}
>2018-06-25 05:59:02,127 p=25239 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] ***
>2018-06-25 05:59:02,447 p=25239 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"}
>2018-06-25 05:59:02,464 p=25239 u=mistral | TASK [container-registry : ensure docker group exists] *************************
>2018-06-25 05:59:02,788 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false}
>2018-06-25 05:59:02,807 p=25239 u=mistral | TASK [container-registry : add deployment user to docker group] ****************
>2018-06-25 05:59:02,831 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>2018-06-25 05:59:02,849 p=25239 u=mistral | TASK [container-registry : force systemd to reread configs] ********************
>2018-06-25 05:59:03,232 p=25239 u=mistral | ok: [ceph-0] => {"changed": false, "name": null, "status": {}}
>2018-06-25 05:59:03,251 p=25239 u=mistral | TASK [container-registry : enable and start docker] ****************************
>2018-06-25 05:59:04,950 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "rhel-push-plugin.socket network.target system.slice registries.service systemd-journald.socket docker-storage-setup.service basic.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "14904", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "docker-cleanup.timer registries.service basic.target rhel-push-plugin.socket", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}}
>2018-06-25 05:59:04,952 p=25239 u=mistral | RUNNING HANDLER [container-registry : restart docker] **************************
>2018-06-25 05:59:07,676 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Mon 2018-06-25 05:58:50 EDT", "ActiveEnterTimestampMonotonic": "374516703", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "rhel-push-plugin.socket docker-storage-setup.service registries.service basic.target systemd-journald.socket network.target system.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Mon 2018-06-25 05:58:49 EDT", "AssertTimestampMonotonic": "373326667", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Mon 2018-06-25 05:58:49 EDT", "ConditionTimestampMonotonic": "373326667", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "25605", "ExecMainStartTimestamp": "Mon 2018-06-25 05:58:49 EDT", "ExecMainStartTimestampMonotonic": "373328686", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Mon 2018-06-25 05:58:49 EDT] ; stop_time=[n/a] ; pid=25605 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Mon 2018-06-25 05:58:49 EDT", "InactiveExitTimestampMonotonic": "373328723", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127793", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "25605", "MemoryAccounting": "no", "MemoryCurrent": "65736704", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "docker-cleanup.timer basic.target registries.service rhel-push-plugin.socket", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "22", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestamp": "Mon 2018-06-25 05:58:50 EDT", "WatchdogTimestampMonotonic": "374516650", "WatchdogUSec": "0"}}
>2018-06-25 05:59:07,699 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Mon 2018-06-25 05:59:05 EDT", "ActiveEnterTimestampMonotonic": "476553194", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "basic.target registries.service systemd-journald.socket rhel-push-plugin.socket network.target docker-storage-setup.service system.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Mon 2018-06-25 05:59:03 EDT", "AssertTimestampMonotonic": "475365620", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Mon 2018-06-25 05:59:03 EDT", "ConditionTimestampMonotonic": "475365620", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "16615", "ExecMainStartTimestamp": "Mon 2018-06-25 05:59:03 EDT", "ExecMainStartTimestampMonotonic": "475366649", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Mon 2018-06-25 05:59:03 EDT] ; stop_time=[n/a] ; pid=16615 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Mon 2018-06-25 05:59:03 EDT", "InactiveExitTimestampMonotonic": "475366679", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "14904", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "16615", "MemoryAccounting": "no", "MemoryCurrent": "60268544", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "docker-cleanup.timer basic.target registries.service rhel-push-plugin.socket", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "16", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestamp": "Mon 2018-06-25 05:59:05 EDT", "WatchdogTimestampMonotonic": "476552972", "WatchdogUSec": "0"}}
>2018-06-25 05:59:07,717 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Mon 2018-06-25 05:58:58 EDT", "ActiveEnterTimestampMonotonic": "454973124", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "network.target system.slice rhel-push-plugin.socket registries.service systemd-journald.socket docker-storage-setup.service basic.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Mon 2018-06-25 05:58:57 EDT", "AssertTimestampMonotonic": "453791768", "Before": "multi-user.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Mon 2018-06-25 05:58:57 EDT", "ConditionTimestampMonotonic": "453791768", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "19365", "ExecMainStartTimestamp": "Mon 2018-06-25 05:58:57 EDT", "ExecMainStartTimestampMonotonic": "453793247", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Mon 2018-06-25 05:58:57 EDT] ; stop_time=[n/a] ; pid=19365 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Mon 2018-06-25 05:58:57 EDT", "InactiveExitTimestampMonotonic": "453793284", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "19365", "MemoryAccounting": "no", "MemoryCurrent": "63365120", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "docker-cleanup.timer rhel-push-plugin.socket basic.target registries.service", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestamp": "Mon 2018-06-25 05:58:58 EDT", "WatchdogTimestampMonotonic": "454973074", "WatchdogUSec": "0"}}
>2018-06-25 05:59:07,724 p=25239 u=mistral | PLAY [Overcloud common deploy step tasks 1] ************************************
>2018-06-25 05:59:07,750 p=25239 u=mistral | TASK [Create /var/lib/tripleo-config directory] ********************************
>2018-06-25 05:59:08,157 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-06-25 05:59:08,168 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-06-25 05:59:08,184 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-06-25 05:59:08,206 p=25239 u=mistral | TASK [Write the puppet step_config manifest] ***********************************
>2018-06-25 05:59:08,911 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "f8a32eb42203ada5e675fbde141df7f32100af5c", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "c727dc3c35ede89e7c3d894e3fb81da4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1588, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920748.31-81196153094755/source", "state": "file", "uid": 0}
>2018-06-25 05:59:08,928 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "149113e83b0cb4d05192576bcff7b6fc0f316bd0", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "66bedc7c4ccee7cb079b118c09f8c08c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1630, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920748.25-199941273617580/source", "state": "file", "uid": 0}
>2018-06-25 05:59:08,965 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "be3cadf4421fbe374d33f269513ff6e3f1c7ab66", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "86461fb932aeaba90516617c8168d5f2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1576, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920748.27-246743553238531/source", "state": "file", "uid": 0}
>2018-06-25 05:59:08,991 p=25239 u=mistral | TASK [Create /var/lib/docker-puppet] *******************************************
>2018-06-25 05:59:09,393 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0}
>2018-06-25 05:59:09,402 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0}
>2018-06-25 05:59:09,419 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0}
>2018-06-25 05:59:09,441 p=25239 u=mistral | TASK [Write docker-puppet.json file] *******************************************
>2018-06-25 05:59:10,157 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "c8d0c143121b7904490da6698d68f76bf1957b51", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "c6d9b1246ac65ebadc18213639c2431d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 234, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920749.53-159552756374649/source", "state": "file", "uid": 0}
>2018-06-25 05:59:10,218 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "09cb610f7fea36dc33be3297b42ac38af987732e", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "e806efb887de6e5795dea0490c302e84", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2288, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920749.51-175498431170167/source", "state": "file", "uid": 0}
>2018-06-25 05:59:10,221 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "c5bc7cf017025a018ebda9dd2ad6aac290a51bef", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "b53dfdbc008416d050550640e4219f21", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 13304, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920749.52-97120431289112/source", "state": "file", "uid": 0}
>2018-06-25 05:59:10,244 p=25239 u=mistral | TASK [Create /var/lib/docker-config-scripts] ***********************************
>2018-06-25 05:59:10,698 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-06-25 05:59:10,730 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-06-25 05:59:10,740 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
>2018-06-25 05:59:10,761 p=25239 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] ***********
>2018-06-25 05:59:11,210 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"}
>2018-06-25 05:59:11,247 p=25239 u=mistral | ok: [ceph-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"}
>2018-06-25 05:59:11,260 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"}
>2018-06-25 05:59:11,283 p=25239 u=mistral | TASK [Write docker config scripts] *********************************************
>2018-06-25 05:59:12,067 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': 'nova_api_discover_hosts.sh'}) => {"changed": true, "checksum": "4e350e3d48cba294f2ccab34eb03c1dee23e7f82", "dest": "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh", "gid": 0, "group": "root", "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "md5sum": "ed5dca102b28b4f992943612dee2dced", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2318, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920751.43-235815192824183/source", "state": "file", "uid": 0}
>2018-06-25 05:59:12,091 p=25239 u=mistral | changed: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": true, "checksum": "03f62b0a94bee17ece72ba1a3fc7577e68d9e6a4", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "md5sum": "1672c3fb89d576d045d5f3d5b23684c9", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920751.42-275686084585719/source", "state": "file", "uid": 0}
>2018-06-25 05:59:12,671 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': 'create_swift_secret.sh'}) => {"changed": true, "checksum": "e77b96beec241bb84928d298a778521376225c0d", "dest": "/var/lib/docker-config-scripts/create_swift_secret.sh", "gid": 0, "group": "root", "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "md5sum": "9277d70c2fd62961998c5fce0a8aeee2", "mode": "0700", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 1125, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920752.1-64453567378553/source", "state": "file", "uid": 0} >2018-06-25 05:59:13,264 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': 'neutron_ovs_agent_launcher.sh'}) => {"changed": true, "checksum": "03f62b0a94bee17ece72ba1a3fc7577e68d9e6a4", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "md5sum": "1672c3fb89d576d045d5f3d5b23684c9", "mode": "0755", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920752.7-232468266051286/source", "state": "file", "uid": 0} >2018-06-25 05:59:13,853 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': 'set_swift_keymaster_key_id.sh'}) => {"changed": true, "checksum": "9c2474fa6e4a8869674b689206eb1a1658a28fc6", "dest": "/var/lib/docker-config-scripts/set_swift_keymaster_key_id.sh", "gid": 0, "group": "root", "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport 
OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "md5sum": "054225f8957e4457ef2103ce24d44b04", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1275, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920753.29-30237620094057/source", "state": "file", "uid": 0} >2018-06-25 05:59:14,504 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n 
--modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': 'docker_puppet_apply.sh'}) => {"changed": true, "checksum": "93afaa6df42c9ead7768b295fa901f83ae1b39ef", "dest": "/var/lib/docker-config-scripts/docker_puppet_apply.sh", "gid": 0, "group": "root", "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "md5sum": "709b2caef95cc7486f9b851414e71133", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 653, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920753.88-143873516875201/source", "state": "file", "uid": 0} >2018-06-25 05:59:15,114 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell 
--name=default"\nfi\n', 'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) => {"changed": true, "checksum": "0a839197c2fa15204014befb1c771a17aea5bdd1", "dest": "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh", "gid": 0, "group": "root", "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "md5sum": "12a4a82656ddaae342942097b752d9db", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 442, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920754.53-260504745804375/source", "state": "file", "uid": 0} >2018-06-25 05:59:15,139 p=25239 u=mistral | TASK [Set docker_config_default fact] ****************************************** >2018-06-25 05:59:15,227 p=25239 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 05:59:15,227 p=25239 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 05:59:15,228 p=25239 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 05:59:15,228 p=25239 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was 
specified for this result", "changed": false}
>2018-06-25 05:59:15,237 p=25239 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,238 p=25239 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,242 p=25239 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,247 p=25239 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,253 p=25239 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,253 p=25239 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,258 p=25239 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,260 p=25239 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,262 p=25239 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,275 p=25239 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,280 p=25239 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,282 p=25239 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,288 p=25239 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,288 p=25239 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,312 p=25239 u=mistral | TASK [Set docker_startup_configs_with_default fact] ****************************
>2018-06-25 05:59:15,425 p=25239 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,452 p=25239 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,819 p=25239 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>2018-06-25 05:59:15,839 p=25239 u=mistral | TASK [Write docker-container-startup-configs] **********************************
>2018-06-25 05:59:16,526 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "ce9bc1dccca0cdcaa3098c1a790d78a8c694a5a4", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": 
"ccd9b33a462e8e1243e2dc1f30301019", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1055, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920755.93-51686215840183/source", "state": "file", "uid": 0}
>2018-06-25 05:59:16,573 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "81101164a2faf9d04b8327ba923e538c4a314d8a", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "c3906e7888630fe1d44fd18f8f2dc1b8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 11909, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920755.9-243257240087570/source", "state": "file", "uid": 0}
>2018-06-25 05:59:16,600 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "95a58726961c52376ff077e9fadeda12c06eaf38", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "82edbb1555c2bdb58c0a7cb64109fc76", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 105573, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920755.89-213771197735773/source", "state": "file", "uid": 0}
>2018-06-25 05:59:16,623 p=25239 u=mistral | TASK [Write per-step docker-container-startup-configs] *************************
>2018-06-25 05:59:17,340 p=25239 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920756.72-220184570847345/source", "state": "file", "uid": 0}
>2018-06-25 05:59:17,385 p=25239 u=mistral | changed: 
[compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920756.69-2536852427284/source", "state": "file", "uid": 0} >2018-06-25 05:59:17,432 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': 
[u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'memcached_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log'], 'user': u'root', 'volumes': [u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'detach': False, 'privileged': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=eT4ymnWN2YlqROumSbpNpoGCB', u'DB_ROOT_PASSWORD=ufdBL6tH5c'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT 
OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': 
False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=eK5rGtu1BhrxK9TvrK0l'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": true, "checksum": "de79053487d5dcd69811f351dfb60260e23f46a2", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", 
"volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "memcached_init_logs": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout 
${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=eT4ymnWN2YlqROumSbpNpoGCB", "DB_ROOT_PASSWORD=ufdBL6tH5c"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=eK5rGtu1BhrxK9TvrK0l"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, 
"md5sum": "acaec494d00331ed6f85bae3c1de3a74", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 7434, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920756.71-242004324003560/source", "state": "file", "uid": 0} >2018-06-25 05:59:17,978 p=25239 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920757.35-20695276147671/source", "state": "file", "uid": 0} >2018-06-25 05:59:18,098 p=25239 u=mistral | changed: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_libvirt': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', 
u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": true, "checksum": "7410b402d81937d9a195a3bf5e8207fa09cdb6e0", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", 
"/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "md5sum": "57cce5acf78ba9c384000a575f958249", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 5050, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920757.39-273670639799831/source", "state": "file", "uid": 0} >2018-06-25 05:59:18,178 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', 
u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': 
False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', 
u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': False}, 'neutron_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'fLWtJZCynkwHz2bnZopp1aRC2'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": true, "checksum": "c726fe0fd1030746652738c79b56209660701232", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "start_order": 0, "user": "root", "volumes": 
["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", 
"ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "fLWtJZCynkwHz2bnZopp1aRC2"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", 
"/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", 
"/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "md5sum": "73a33c2e0e3ee0fdeaba1e0c392d5eb6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 21820, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920757.44-191323559196860/source", "state": "file", "uid": 0} >2018-06-25 05:59:18,597 p=25239 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920757.99-41297887819782/source", "state": "file", "uid": 0} >2018-06-25 05:59:18,794 p=25239 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": 
"/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920758.1-14714484817508/source", "state": "file", "uid": 0} >2018-06-25 05:59:18,896 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko 
/var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': 
[u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': 
{'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': 
[u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', 
u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": true, "checksum": "472db63cdc4afbedb8b7b8327abca190d5560eba", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "privileged": false, "user": "root", "volumes": 
["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", 
"file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "md5sum": "fdfa5f4ada0fb8f2a23d2165a5c7573a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 17318, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920758.17-132764519095768/source", "state": "file", "uid": 0} >2018-06-25 05:59:19,226 p=25239 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920758.61-72753897136546/source", "state": "file", "uid": 0} >2018-06-25 05:59:19,495 p=25239 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920758.8-278950874394727/source", "state": "file", "uid": 0} >2018-06-25 05:59:19,590 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'environment': 
[u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_statsd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 
'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'gnocchi_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}}, 'key': u'step_5'}) => {"changed": true, "checksum": "2d471b8f626277a521f724dbfe2a127f4d22c5e8", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", 
"/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], 
"config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", 
"/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "md5sum": "78826763f44429e73ce20d41a74f403c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 10552, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920758.9-168288302025123/source", "state": "file", "uid": 0} >2018-06-25 05:59:19,856 p=25239 u=mistral | changed: [ceph-0] => (item={'value': 
{'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "8acd94aee3f5b5403e8fb7f16593594f245dafee", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "md5sum": "2aaa44b365bea28e18d96f2f17bef412", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 973, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920759.24-58464838855830/source", "state": "file", "uid": 0} >2018-06-25 05:59:20,183 p=25239 u=mistral | changed: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '78ace352-763a-11e8-9c1d-525400166144' --base64 'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw=='"], 'user': u'root', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 
'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', 
u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "493a741aefb7f85135a9acae920c40e4d084ce8c", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4", "net": "host", "privileged": false, "restart": 
"always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '78ace352-763a-11e8-9c1d-525400166144' --base64 'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "md5sum": "88242be594f2a379756c0534b729ae82", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6779, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920759.5-262846650401918/source", "state": "file", "uid": 0} >2018-06-25 05:59:20,336 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'swift_container_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 
'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', 
u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 
'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': 
u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 
'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', 
u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 
'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 
'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "a1be6aa2d4cc45e104b7c75319745196e636d5d2", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", 
"volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", 
"/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4", "net": "host", "privileged": false, 
"restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4", "net": 
"host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": 
"/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, 
"swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", 
"/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", 
"/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", 
"/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", 
"/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "md5sum": "1f138d32563935823e0ae333e7382fb3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 48375, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920759.6-59844640500395/source", "state": "file", "uid": 0} >2018-06-25 05:59:20,486 p=25239 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920759.86-40176613113198/source", "state": "file", "uid": 0} >2018-06-25 05:59:20,859 p=25239 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920760.19-104512580683144/source", "state": "file", "uid": 0} >2018-06-25 05:59:20,955 p=25239 u=mistral | changed: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920760.31-248553708819242/source", "state": "file", "uid": 0} >2018-06-25 05:59:21,058 p=25239 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-06-25 05:59:21,433 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:59:21,443 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:59:21,473 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-25 05:59:21,501 p=25239 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-06-25 05:59:22,216 p=25239 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, 
"md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920761.6-229447081715219/source", "state": "file", "uid": 0} >2018-06-25 05:59:22,226 p=25239 u=mistral | changed: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920761.62-74946682612682/source", "state": "file", "uid": 0} >2018-06-25 05:59:22,457 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": 
"0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920761.79-201711431415762/source", "state": "file", "uid": 0} >2018-06-25 05:59:22,846 p=25239 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": true, "checksum": "40f9ceb4dd2fc8e9c51bf5152a0fa8e1d16d9137", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "md5sum": "9cd3c2dc0153b127d70141dadfabd12c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 175, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920762.22-155406377247931/source", "state": "file", "uid": 0} >2018-06-25 05:59:23,120 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/keystone.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/keystone.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920762.46-69971032804957/source", "state": "file", "uid": 0} >2018-06-25 05:59:23,470 p=25239 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": true, "checksum": "b50cbe1f8b020aa49249248b57310f45005813b3", "dest": "/var/lib/kolla/config_files/nova_libvirt.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "8356787bbcfcb5674a0bf2570719654a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 512, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920762.85-15620240639679/source", "state": "file", "uid": 0} >2018-06-25 05:59:23,767 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": true, "checksum": "0e697e31bdc439b99552bac9ffe0bab07f2af4a4", "dest": "/var/lib/kolla/config_files/cinder_backup.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "8e107eb8f6989be8375a0ff2dd5b4d57", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920763.13-163310710468859/source", "state": "file", "uid": 0} >2018-06-25 05:59:24,103 p=25239 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': '/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": true, "checksum": "6a0a936a324363cd605e22c2327c17deb6dfbec2", "dest": "/var/lib/kolla/config_files/nova-migration-target.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "md5sum": "161558d57b182ca70c6f9bbd7fcbda8a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 258, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920763.48-179589773928162/source", "state": "file", "uid": 0} >2018-06-25 05:59:24,429 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920763.77-17540280613452/source", "state": 
"file", "uid": 0} >2018-06-25 05:59:24,730 p=25239 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': '/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": true, "checksum": "8bbfe195e54ddfe481aaad9744174f7344d49681", "dest": "/var/lib/kolla/config_files/nova_virtlogd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "786b962e2df778e3ce02b185ef93deac", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 193, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920764.11-248897215374235/source", "state": "file", "uid": 0} >2018-06-25 05:59:25,084 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": true, "checksum": "413730fbf3f7935085cfda60cbc1535d8bce0caf", "dest": "/var/lib/kolla/config_files/swift_account_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "dfccd947a56ceb6fa2b71c400281a365", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 
200, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920764.44-95072955737813/source", "state": "file", "uid": 0} >2018-06-25 05:59:25,372 p=25239 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": true, "checksum": "bd1c4f0459f65e7f67a969a89c74a8b8cdcfd9f8", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "3599cf6b814b7c628c2887996ca46138", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920764.74-79956632576965/source", "state": "file", "uid": 0} >2018-06-25 05:59:25,726 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": true, "checksum": "2bf5ca66cb377c9fa3e6880f8b078d1312470cde", "dest": "/var/lib/kolla/config_files/swift_account_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator 
/etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d4a857b7e18f40f1cc1e6fd265c89770", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 203, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920765.09-187268161492774/source", "state": "file", "uid": 0} >2018-06-25 05:59:26,022 p=25239 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/var/lib/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": true, "checksum": "bb1c3bcd199b74791ea32746c08f4925a3b585a2", "dest": "/var/lib/kolla/config_files/nova_compute.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/var/lib/nova", "recurse": 
true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "70b809037933259f45bb1585e9e6a4cc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 643, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920765.38-64809129927632/source", "state": "file", "uid": 0} >2018-06-25 05:59:26,377 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": true, "checksum": "e01d19d7f7cff24dfcc0d132b7d8ceabba199142", "dest": "/var/lib/kolla/config_files/aodh_notifier.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "5d4a748030a9a7476ccbd8902fb654fc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 244, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920765.74-57313327566996/source", "state": "file", "uid": 0} >2018-06-25 05:59:26,678 p=25239 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": true, "checksum": "4b3e97fcd87fd70b35934d1ef908747f302a4d11", "dest": 
"/var/lib/kolla/config_files/ceilometer_agent_compute.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d91832a36a0ad3616a4e78c1af7d0db5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 237, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920766.03-255968010955626/source", "state": "file", "uid": 0} >2018-06-25 05:59:26,974 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": true, "checksum": "23416bae23a2c08d2c534f76d19f8c4bad40ee92", "dest": "/var/lib/kolla/config_files/nova_scheduler.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "d00e4198d95dede3f0b6ac351d57a982", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920766.38-16130520244395/source", "state": "file", "uid": 0} >2018-06-25 05:59:27,540 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": true, "checksum": "a13a92b47f931e2e89d7e4bf5057a4307ab9cd45", "dest": "/var/lib/kolla/config_files/heat_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "e671c4783cc86fb2ad300fcd11b2f99b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 240, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920766.98-160323409473223/source", "state": "file", "uid": 0} >2018-06-25 05:59:28,112 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': 
'/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": true, "checksum": "da289f102f641cdd0a02df41c443d7d8387741a5", "dest": "/var/lib/kolla/config_files/neutron_dhcp.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "md5sum": "c5975567082648a9da814c433c49f2d6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 875, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920767.55-137523478037615/source", "state": "file", "uid": 0} >2018-06-25 05:59:28,693 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': 
u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/haproxy.json'}) => {"changed": true, "checksum": "0801385cb9292b3b6eb8440166435242bd90e288", "dest": "/var/lib/kolla/config_files/haproxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "md5sum": "a2742f7abd50bb0af0a4ba55b2f1f4ff", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 648, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920768.12-255785019671460/source", "state": "file", "uid": 0} >2018-06-25 05:59:29,279 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": true, "checksum": "c1a1552a71f4daefebff5234f9d8ba71f4c64d76", "dest": "/var/lib/kolla/config_files/nova_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": 
{"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "6b8ef057a2e5539eacd9f29fc4b94036", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 240, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920768.7-167887234012923/source", "state": "file", "uid": 0} >2018-06-25 05:59:29,837 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": true, "checksum": "a6d2eb62af2f11437c704d13adf72d498324ce2a", "dest": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": 
"d586f0c2ff043bece10efff986d635a3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 531, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920769.29-3614973261138/source", "state": "file", "uid": 0} >2018-06-25 05:59:30,427 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": true, "checksum": "b061cf7478060add5d079aafaeae81b445251a8f", "dest": "/var/lib/kolla/config_files/swift_account_reaper.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "0f3bbe74ca95c8cca321ee32e2aff7d1", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920769.85-77144388537760/source", "state": "file", "uid": 0} >2018-06-25 05:59:31,005 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": true, "checksum": "b7397fff831b47db0b6111663d816a64a389cb25", "dest": "/var/lib/kolla/config_files/sahara-engine.json", "gid": 0, "group": "root", "item": 
{"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "md5sum": "ac2c7a84fc46a1f1d128201ce5b67c2d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 360, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920770.44-166889921230070/source", "state": "file", "uid": 0} >2018-06-25 05:59:31,568 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/redis.json'}) => {"changed": true, "checksum": "66d6d6bd51aaa0c100cdfc7688267a4342c7859f", "dest": "/var/lib/kolla/config_files/redis.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": 
"/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "md5sum": "ceafff1d742633f8759bdb1af0e3ebd4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 843, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920771.01-194470414701686/source", "state": "file", "uid": 0} >2018-06-25 05:59:32,129 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": true, "checksum": "b64555136537c36af22340fb15f21f0e01ac3495", "dest": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": 
"557a4e9522f54cfbd6456516e67f4971", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 271, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920771.57-28708408800287/source", "state": "file", "uid": 0} >2018-06-25 05:59:32,693 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/glance_api.json'}) => {"changed": true, "checksum": "2a93405ac579e31c6e5732983f3d7dd8bed55b33", "dest": "/var/lib/kolla/config_files/glance_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "30c5fe40dffc304e7edeab4019e96e92", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 556, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920772.14-76269715205174/source", "state": "file", 
"uid": 0} >2018-06-25 05:59:33,259 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": true, "checksum": "739f6562d3ea24561c6d8bcf37041a9eac928257", "dest": "/var/lib/kolla/config_files/swift_container_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "b63816c7c08aef58249d13b65b387da6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 204, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920772.7-224337055939501/source", "state": "file", "uid": 0} >2018-06-25 05:59:33,830 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": true, "checksum": "98adef088b2ae2648ac88b812890957ec54eff13", "dest": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": 
"/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "md5sum": "4a38c9578181c292891f5f7bdb9f791b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 428, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920773.27-220063333506063/source", "state": "file", "uid": 0} >2018-06-25 05:59:34,419 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": true, "checksum": "ebbb7ee6895cea2b9278f33e888881d3d3f1a68a", "dest": "/var/lib/kolla/config_files/swift_object_expirer.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "e4bf891d8ffc9a015be201a6ef0d5abc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920773.84-166839248292346/source", "state": "file", "uid": 0} >2018-06-25 05:59:34,979 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': 
u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": true, "checksum": "53d52f7d52f0fb3da33de2c20414eb3248593fdd", "dest": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "2863f917d7ada51e9570fb53bb363eed", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 237, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920774.43-20596700411100/source", "state": "file", "uid": 0} >2018-06-25 05:59:35,539 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api.json'}) => {"changed": true, "checksum": "454582321236a137f78205f328bae190c02f06b0", "dest": "/var/lib/kolla/config_files/heat_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "c04ac0476ee6639fadf252b0e9d9649b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920774.99-41646915833898/source", "state": "file", "uid": 0} >2018-06-25 05:59:36,096 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': '/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": true, "checksum": "44a8f1a58092190d553d3f589cab9ae566f8dc81", "dest": "/var/lib/kolla/config_files/swift_rsync.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "886febadf691905adf0c129f3aa0197a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 200, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920775.55-44911368303159/source", "state": "file", "uid": 0} >2018-06-25 05:59:36,649 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": true, "checksum": "279b64a7d6914d2a03c86c703f53e3d71b1daef1", "dest": "/var/lib/kolla/config_files/swift_account_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": 
"b41d67c146c800142c5405fe5a0b332e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920776.11-216226187288892/source", "state": "file", "uid": 0} >2018-06-25 05:59:37,197 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": true, "checksum": "06055a69fec2bc513b4c86ceb654a5fc29bd0866", "dest": "/var/lib/kolla/config_files/cinder_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "801aba1299d99bfd7e63f66ca7a4ba40", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920776.66-193460528875295/source", "state": "file", "uid": 0} >2018-06-25 05:59:37,746 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": true, "checksum": "a0874b803c5238a4eeb12b1265d5d1db93c0d3d4", "dest": "/var/lib/kolla/config_files/swift_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": 
"/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "a38e4e3ae519b3b0824e19184e521b36", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 195, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920777.2-248446193141964/source", "state": "file", "uid": 0} >2018-06-25 05:59:38,308 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": true, "checksum": "8dbfc3669a6d79fb30702be502ced7501500480a", "dest": "/var/lib/kolla/config_files/swift_container_updater.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "a697319d04392dc572dff6236144571f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 204, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920777.75-240452849012403/source", "state": "file", "uid": 0} >2018-06-25 05:59:38,842 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': '/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": true, "checksum": "3c87335a28b992f90769aea9ea62fb610f8236f1", "dest": "/var/lib/kolla/config_files/clustercheck.json", "gid": 0, "group": 
"root", "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d74434e7b8bcaca0b227152346c13db8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 165, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920778.31-92972072265381/source", "state": "file", "uid": 0} >2018-06-25 05:59:39,386 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/mysql.json'}) => {"changed": true, "checksum": "b52f0d28ed1ac134c64994c08b3f2378e8dff494", "dest": "/var/lib/kolla/config_files/mysql.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": 
"mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "md5sum": "4d15ed291dbe96e88b9a128b0e5c99e9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 687, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920778.85-276370907157663/source", "state": "file", "uid": 0} >2018-06-25 05:59:39,933 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": true, "checksum": "d061b71e9106733354c297cbb7b327a22e476de5", "dest": "/var/lib/kolla/config_files/nova_placement.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "941db485b7079f2f0e008e1bdff8e45f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920779.39-224384731681463/source", "state": "file", "uid": 0} >2018-06-25 05:59:40,478 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': 
u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": true, "checksum": "fd070eb1bdc97442fddc24f503fe5e3251b89e28", "dest": "/var/lib/kolla/config_files/sahara-api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "md5sum": "bd52668d37c227cc00c418bbe889ab90", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 357, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920779.94-126464529392076/source", "state": "file", "uid": 0} >2018-06-25 05:59:41,010 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": true, "checksum": "f4177197cb07127689ae10a60020efa3a5e0d457", "dest": "/var/lib/kolla/config_files/aodh_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "582326e52a94260e71a4a19dc4d75191", "mode": "0600", "owner": 
"root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920780.48-30590807256730/source", "state": "file", "uid": 0} >2018-06-25 05:59:41,546 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': u'/var/log/keystone', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": true, "checksum": "815ba71e0584cb12e7d40f794603c6bfb1800626", "dest": "/var/lib/kolla/config_files/keystone_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "md5sum": "b3b3bbd6499e09c424665311a5e66136", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 252, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920781.02-276132594900017/source", "state": "file", "uid": 0} >2018-06-25 05:59:42,082 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": 
"/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920781.55-17984677189915/source", "state": "file", "uid": 0} >2018-06-25 05:59:42,615 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": true, "checksum": "659d25615392d81b2f6bc001067232495de4d6ac", "dest": "/var/lib/kolla/config_files/swift_object_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "cdea8a372a87263d5fc44b482867a705", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 201, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920782.09-101513844738662/source", "state": "file", "uid": 0} >2018-06-25 05:59:43,138 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": true, "checksum": "01a54792c74d0ebd057e8d0f44e6e8e619283e62", "dest": "/var/lib/kolla/config_files/nova_conductor.json", "gid": 0, "group": 
"root", "item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "ccbba0ad7a926ceca2bf858b8a9cc376", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920782.62-248907380734429/source", "state": "file", "uid": 0} >2018-06-25 05:59:43,686 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": true, "checksum": "454582321236a137f78205f328bae190c02f06b0", "dest": "/var/lib/kolla/config_files/heat_api_cfn.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "c04ac0476ee6639fadf252b0e9d9649b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920783.15-42753555313221/source", "state": "file", "uid": 0} >2018-06-25 05:59:44,228 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-api-metadata ', 
'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": true, "checksum": "edb529183cc509ea82818edf4d88e3650b5ffc57", "dest": "/var/lib/kolla/config_files/nova_metadata.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "45129bd8b5b9aef067edb558a9fb2c68", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 249, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920783.69-206241735387390/source", "state": "file", "uid": 0} >2018-06-25 05:59:44,765 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": true, "checksum": "bd1c4f0459f65e7f67a969a89c74a8b8cdcfd9f8", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "3599cf6b814b7c628c2887996ca46138", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920784.23-234620630686033/source", "state": "file", "uid": 0} >2018-06-25 05:59:45,292 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": true, "checksum": "205ddacf194881a04c54779e3049b3c59ef6c4af", "dest": "/var/lib/kolla/config_files/rabbitmq.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, 
"owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "md5sum": "1097dade2a2355fd51207668004d093d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 792, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920784.77-59592148104212/source", "state": "file", "uid": 0} >2018-06-25 05:59:45,828 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": true, "checksum": "a960878859377dfae6334d9b7eaa9f554ab31798", "dest": "/var/lib/kolla/config_files/nova_consoleauth.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "2a66fc646aae3e5913e0598ccef3881f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 248, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920785.3-150927615813291/source", "state": "file", "uid": 0} >2018-06-25 05:59:46,372 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": true, "checksum": "4f7a34f38afe301f885e25eb10225c461ab1d0b1", "dest": 
"/var/lib/kolla/config_files/swift_object_updater.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "71a7e788486d505cfec645da0ac337cd", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920785.83-234824426222051/source", "state": "file", "uid": 0} >2018-06-25 05:59:46,912 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": true, "checksum": "5a73d3b7ef652341120c9298683d3a26f3fb668b", "dest": "/var/lib/kolla/config_files/neutron_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "c48346aa3f8c096826ebab378db9dfb9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 549, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920786.38-122207182785027/source", "state": "file", "uid": 0} >2018-06-25 05:59:47,451 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": true, "checksum": "9ec49193a63036ecf32a1479eabdac05dcab06e0", "dest": "/var/lib/kolla/config_files/cinder_scheduler.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "93e9da0d08550be0ed30576cefdfbfbb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 340, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920786.92-41151304314785/source", "state": "file", "uid": 0} >2018-06-25 05:59:47,999 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": true, "checksum": "c8763a8c16702042afe553b54212340d800e1509", "dest": "/var/lib/kolla/config_files/gnocchi_metricd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "db9bd25aa2fcd2845d442869e986e7d8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 471, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920787.46-193889488531464/source", "state": "file", "uid": 0} >2018-06-25 05:59:48,557 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': 
u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": true, "checksum": "fe01b9d48d08f239bbf9acf7e2a1492397180c8e", "dest": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "md5sum": "a26f6acfc823d6e2e5b34367b859c8fa", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 617, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920788.01-179677968870195/source", "state": "file", "uid": 0} >2018-06-25 05:59:49,114 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": true, "checksum": "a418eddca731078cfd8fe2fda7ee64d9ffaf7dda", "dest": "/var/lib/kolla/config_files/swift_container_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", 
"config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "930bbe0f8c13b55f664fb3a89dfa1613", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 207, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920788.56-46783089384559/source", "state": "file", "uid": 0} >2018-06-25 05:59:49,673 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": true, "checksum": "fe3989178a2ea434bae6dfd64b04423e3ea005bc", "dest": "/var/lib/kolla/config_files/heat_engine.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "aee05ebc54399dde3dfc3577c3431a92", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 322, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920789.12-90460217692857/source", "state": "file", "uid": 0} >2018-06-25 05:59:50,248 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 
'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api.json'}) => {"changed": true, "checksum": "d061b71e9106733354c297cbb7b327a22e476de5", "dest": "/var/lib/kolla/config_files/nova_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "941db485b7079f2f0e008e1bdff8e45f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920789.68-108548343987986/source", "state": "file", "uid": 0} >2018-06-25 05:59:50,834 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": true, "checksum": "460cdcfbcfac45a30b03df89ac84d2f34db64d72", "dest": "/var/lib/kolla/config_files/swift_object_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "md5sum": "b00c233fd2cd32c68e429e42918b8245", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 285, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920790.26-164724373326231/source", "state": "file", "uid": 0} >2018-06-25 05:59:51,403 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': '/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": true, "checksum": "80800f9f267aaf3497499af70b7945e3b6ae771b", "dest": "/var/lib/kolla/config_files/redis_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "c45d2764863cc585b994d432412ff9e8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 172, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920790.84-118871370473045/source", "state": "file", "uid": 0} >2018-06-25 05:59:51,995 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": true, "checksum": "39f33531116fbcba7a5d9c1cbbc32f4af5e6b981", "dest": "/var/lib/kolla/config_files/gnocchi_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": 
"/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "5e924ffe736d942bf904a791bf5b5af2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 475, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920791.41-234515856997665/source", "state": "file", "uid": 0} >2018-06-25 05:59:52,583 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": true, "checksum": "7f36445e4c6eb403ce919ca3adee771d4cb3bcce", "dest": "/var/lib/kolla/config_files/cinder_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "bb3e2e5741eb3e5b6c53da835e66d00d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 256, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920792.0-258806029170619/source", "state": "file", "uid": 0} >2018-06-25 05:59:53,159 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 
'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": true, "checksum": "e800a0e1c86f8fa7a41efbf24ce38f48a458ba51", "dest": "/var/lib/kolla/config_files/cinder_volume.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "a85ec43ba623807ac022c04663fa68f5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 579, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920792.59-16247640918455/source", "state": "file", "uid": 0} >2018-06-25 05:59:53,711 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 'path': 
u'/var/log/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/panko_api.json'}) => {"changed": true, "checksum": "2db8f01174b9c2aa3a180add472b54891aed5cd6", "dest": "/var/lib/kolla/config_files/panko_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "md5sum": "7d9530934c938a4c96f71797957f7ca8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 253, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920793.17-131492241613575/source", "state": "file", "uid": 0} >2018-06-25 05:59:54,279 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": true, "checksum": "fbcdad9219733b81ad969426553906c1a8648897", "dest": "/var/lib/kolla/config_files/swift_object_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "45f7348541b64a76aec07477ea1d7358", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920793.72-236217661269810/source", "state": "file", "uid": 0} >2018-06-25 05:59:54,827 p=25239 u=mistral | changed: [controller-0] => (item={'value': 
{'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": true, "checksum": "cd233477dc9defd8028ac1a8fe736b8c9fcde9f8", "dest": "/var/lib/kolla/config_files/neutron_l3_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "md5sum": "b47a8dc2601f0e1c404b9009d1c99c32", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 634, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920794.29-16567935889954/source", "state": "file", "uid": 0} >2018-06-25 05:59:55,372 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": true, "checksum": "a7135286aba5eb111dc77c913fc1f7dc0977e783", "dest": "/var/lib/kolla/config_files/aodh_listener.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "ff2b7ae2bb8061a36a8223f5c34a970b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 244, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920794.83-172654530339195/source", "state": "file", "uid": 0} >2018-06-25 05:59:55,918 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": true, "checksum": "1f5cc060becbca7be3515f39537993b91e109a6d", "dest": "/var/lib/kolla/config_files/swift_container_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "59a9944c2c3c07fec0293d2efd7d8082", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 203, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920795.38-136531220093042/source", "state": "file", "uid": 0} >2018-06-25 05:59:56,455 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": true, "checksum": "596ee1b7f45471d04a0bc3d985f82ad722631b98", "dest": "/var/lib/kolla/config_files/aodh_evaluator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "94c5432632bf2acca69de0063414183b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 245, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920795.93-71931492310383/source", "state": "file", "uid": 0} >2018-06-25 05:59:56,997 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920796.46-128578439405022/source", "state": "file", "uid": 0} >2018-06-25 05:59:57,534 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": true, "checksum": "40f9ceb4dd2fc8e9c51bf5152a0fa8e1d16d9137", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "md5sum": "9cd3c2dc0153b127d70141dadfabd12c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 175, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920797.01-147779025034613/source", "state": "file", "uid": 0} >2018-06-25 05:59:58,082 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": true, "checksum": "1a38774f0fed561a8f1ad8c7f0a976a71a7f7008", "dest": 
"/var/lib/kolla/config_files/gnocchi_statsd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "b98425b2f26d4e30448a72685b1f89ad", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 470, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920797.54-227211257219024/source", "state": "file", "uid": 0} >2018-06-25 05:59:58,623 p=25239 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': '/var/lib/kolla/config_files/horizon.json'}) => {"changed": true, "checksum": "fc55910103403d0bb92e62e940dbd536aff43f84", "dest": "/var/lib/kolla/config_files/horizon.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "md5sum": "77504b6ea1f544f3c70dbc4115bfc354", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 587, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920798.09-87338865789606/source", "state": "file", "uid": 0} >2018-06-25 05:59:58,683 p=25239 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-06-25 05:59:58,697 p=25239 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-25 05:59:58,720 p=25239 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-25 05:59:58,747 p=25239 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-25 05:59:58,774 p=25239 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-06-25 05:59:59,375 p=25239 u=mistral | changed: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4'}], 'key': u'step_3'}) => {"changed": true, "checksum": "730e4e048205e1fadc6cd518326d4622d77edad6", "dest": 
"/var/lib/docker-puppet/docker-puppet-tasks3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "md5sum": "56e31c6a27d11dc618833f5679009c9d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 397, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920798.83-143448144099037/source", "state": "file", "uid": 0} >2018-06-25 05:59:59,401 p=25239 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-06-25 05:59:59,432 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:59:59,460 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:59:59,476 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 05:59:59,499 p=25239 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-06-25 06:00:00,204 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920799.54-185308729061961/source", "state": "file", "uid": 0} >2018-06-25 06:00:00,232 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, 
"checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920799.6-62858740991163/source", "state": "file", "uid": 0} >2018-06-25 06:00:00,233 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529920799.57-151232894586253/source", "state": "file", "uid": 0} >2018-06-25 06:00:00,258 p=25239 u=mistral | TASK [Run puppet host configuration for step 1] ******************************** >2018-06-25 06:00:14,831 p=25239 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-25 06:00:17,266 p=25239 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-25 06:01:27,097 p=25239 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-25 06:01:27,119 p=25239 u=mistral | TASK [Debug output for task which failed: Run puppet host configuration for step 1] *** >2018-06-25 06:01:27,247 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- 
hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.85 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created", > "Notice: /Stage[main]/Timezone/Exec[update_timezone]/returns: executed successfully", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}cd4f1c81e5026d1ca57a74ef5c2b2fa6'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: 
Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", 
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > 
"Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val 
changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Service/Service[pcsd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/password: changed password", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/groups: groups changed '' to ['haclient']", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/ensure: defined content as '{md5}050dd67b736b9b417ae97c822e4867ca'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Service/Service[corosync]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Service/Service[pacemaker]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: executed successfully", > "Notice: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Pacemaker::Stonith/Pacemaker::Property[Disable STONITH]/Pcmk_property[property--stonith-enabled]/ensure: created", > "Notice: 
/Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 76.96 seconds", > "Changes:", > " Total: 166", > "Events:", > " Success: 166", > "Resources:", > " Changed: 165", > " Out of sync: 165", > " Total: 216", > " Restarted: 5", > "Time:", > " Concat file: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " File line: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " User: 0.05", > " Sysctl: 0.17", > " File: 0.18", > " Sysctl runtime: 0.23", > " Package: 0.41", > " Pcmk property: 1.07", > " Firewall: 15.66", > " Last run: 1529920886", > " Service: 2.81", > " Config retrieval: 3.28", > " Exec: 53.26", > " Total: 77.15", > " Concat fragment: 0.00", > " Filebucket: 0.00", > "Version:", > " Config: 1529920806", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-25 06:01:27,266 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.86 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Timezone/Exec[update_timezone]/returns: executed successfully", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: 
/Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}cd4f1c81e5026d1ca57a74ef5c2b2fa6'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to 
lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt 
ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 8.21 seconds", > "Changes:", > " Total: 98", > "Events:", > " Success: 98", > "Resources:", > " Total: 141", > " Restarted: 3", > " Out of sync: 98", > " Changed: 98", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " Sysctl: 0.07", > " File: 0.16", > " Package: 0.26", > " Sysctl runtime: 0.28", > " Service: 1.29", > " Last run: 1529920816", > " Exec: 2.12", > " Config retrieval: 2.19", > " Firewall: 2.69", > " Total: 9.09", > " Concat fragment: 0.00", > "Version:", > " Config: 1529920806", > 
" Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-25 06:01:27,287 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 1.82 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Timezone/Exec[update_timezone]/returns: executed successfully", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: 
/Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}cd4f1c81e5026d1ca57a74ef5c2b2fa6'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '65536' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '65536' to '500000'", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo 
interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: 
/Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 6.50 seconds", > "Changes:", > " Total: 92", > "Events:", > " Success: 92", > "Resources:", > " Total: 135", > " Restarted: 3", > " Out of sync: 92", > " Changed: 92", > "Time:", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " File: 0.13", > " Sysctl: 0.14", > " Sysctl runtime: 0.18", > " Package: 0.22", > " Service: 1.17", > " Firewall: 1.54", > " Exec: 1.94", > " Last run: 1529920814", > " Config retrieval: 2.09", > " Total: 7.43", > " Filebucket: 0.00", > "Version:", > " Config: 1529920806", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-25 06:01:27,311 p=25239 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 1] ***************** >2018-06-25 06:01:47,385 p=25239 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:02:19,469 p=25239 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:04:04,351 p=25239 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:04:04,372 p=25239 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 1] *** >2018-06-25 06:04:04,501 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-06-25 10:01:27,985 INFO: 20439 -- Running docker-puppet", > "2018-06-25 10:01:27,985 DEBUG: 20439 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-06-25 10:01:27,985 DEBUG: 
20439 -- config_volume crond", > "2018-06-25 10:01:27,986 DEBUG: 20439 -- puppet_tags ", > "2018-06-25 10:01:27,986 DEBUG: 20439 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-25 10:01:27,986 DEBUG: 20439 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:01:27,986 DEBUG: 20439 -- volumes []", > "2018-06-25 10:01:27,986 DEBUG: 20439 -- Adding new service", > "2018-06-25 10:01:27,986 INFO: 20439 -- Service compilation completed.", > "2018-06-25 10:01:27,986 DEBUG: 20439 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', []]", > "2018-06-25 10:01:27,986 INFO: 20439 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-06-25 10:01:28,000 INFO: 20441 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:01:28,000 DEBUG: 20441 -- config_volume crond", > "2018-06-25 10:01:28,001 DEBUG: 20441 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-06-25 10:01:28,001 DEBUG: 20441 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-25 10:01:28,001 DEBUG: 20441 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:01:28,001 DEBUG: 20441 -- volumes []", > "2018-06-25 10:01:28,002 INFO: 20441 -- Removing container: docker-puppet-crond", > "2018-06-25 10:01:28,107 INFO: 20441 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:01:40,690 DEBUG: 20441 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "e0f71f706c2a: Pulling fs layer", > "121ab4741000: Pulling fs layer", > "a8ff0031dfcb: Pulling fs layer", > "a94d9ea04263: Pulling fs layer", > "a94d9ea04263: Waiting", > "121ab4741000: Verifying Checksum", > "121ab4741000: Download complete", > "a94d9ea04263: Verifying Checksum", > "a94d9ea04263: Download complete", > "a8ff0031dfcb: Verifying Checksum", > "a8ff0031dfcb: Download complete", > "e0f71f706c2a: Verifying Checksum", > "e0f71f706c2a: Download complete", > "e0f71f706c2a: Pull complete", > "121ab4741000: Pull complete", > "a8ff0031dfcb: Pull complete", > "a94d9ea04263: Pull complete", > "Digest: sha256:cbc58f1f133447db6c3e634ca05251825f6a2ede8528959b5cd6e0cb1c3de3ba", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "", > "2018-06-25 10:01:40,694 DEBUG: 20441 -- NET_HOST enabled", > "2018-06-25 10:01:40,694 DEBUG: 20441 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=ceph-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp8cGzlM:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:01:47,399 DEBUG: 20441 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 0.50 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}13ae5d5b43716a32da6855edd3f15758'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.03 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " File: 0.00", > " Cron: 0.01", > " Config retrieval: 0.56", > " Total: 0.57", > " Last run: 1529920906", > "Version:", > " Config: 1529920906", > " Puppet: 4.8.2", > "Gathering files modified after 2018-06-25 10:01:40.911714362 +0000", > "2018-06-25 10:01:47,399 DEBUG: 20441 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=ceph-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > 
"Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:01:40.911714362 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ awk '{print $1}'", > "tar: Removing leading `/' from member names", > "+ md5sum", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "tar: Removing leading `/' from member names+ md5sum", > "2018-06-25 10:01:47,399 INFO: 20441 -- Removing container: docker-puppet-crond", > "2018-06-25 10:01:47,436 DEBUG: 20441 -- docker-puppet-crond", > "2018-06-25 
10:01:47,437 INFO: 20441 -- Finished processing puppet configs for crond", > "2018-06-25 10:01:47,437 DEBUG: 20439 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-06-25 10:01:47,438 DEBUG: 20439 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-06-25 10:01:47,441 DEBUG: 20439 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-25 10:01:47,441 DEBUG: 20439 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-25 10:01:47,441 DEBUG: 20439 -- Updating config hash for logrotate_crond, config_volume=crond hash=56f5cabe5f6214c72ce8d92772328b19" > ] >} >2018-06-25 06:04:04,848 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-06-25 10:01:27,904 INFO: 24483 -- Running docker-puppet", > "2018-06-25 10:01:27,904 DEBUG: 24483 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-06-25 10:01:27,904 DEBUG: 24483 -- config_volume ceilometer", > "2018-06-25 10:01:27,905 DEBUG: 24483 -- puppet_tags ceilometer_config", > "2018-06-25 10:01:27,905 DEBUG: 24483 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "", > "2018-06-25 10:01:27,905 DEBUG: 24483 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-25 10:01:27,905 DEBUG: 24483 -- volumes []", > "2018-06-25 10:01:27,905 DEBUG: 24483 -- Adding new service", > "2018-06-25 10:01:27,905 DEBUG: 24483 -- config_volume neutron", > "2018-06-25 10:01:27,905 DEBUG: 24483 -- puppet_tags neutron_plugin_ml2", > "2018-06-25 10:01:27,905 DEBUG: 24483 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-06-25 10:01:27,905 DEBUG: 24483 -- config_image 
192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-25 10:01:27,905 DEBUG: 24483 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-06-25 10:01:27,905 DEBUG: 24483 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-06-25 10:01:27,906 DEBUG: 24483 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-06-25 10:01:27,906 DEBUG: 24483 -- Existing service, appending puppet tags and manifest", > "2018-06-25 10:01:27,906 DEBUG: 24483 -- config_volume iscsid", > "2018-06-25 10:01:27,906 DEBUG: 24483 -- puppet_tags iscsid_config", > "2018-06-25 10:01:27,906 DEBUG: 24483 -- manifest include ::tripleo::profile::base::iscsid", > "2018-06-25 10:01:27,906 DEBUG: 24483 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-25 10:01:27,906 DEBUG: 24483 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-06-25 10:01:27,906 DEBUG: 24483 -- Adding new service", > "2018-06-25 10:01:27,906 DEBUG: 24483 -- config_volume nova_libvirt", > "2018-06-25 10:01:27,906 DEBUG: 24483 -- puppet_tags nova_config,nova_paste_api_ini", > "2018-06-25 10:01:27,906 DEBUG: 24483 -- manifest # TODO(emilien): figure how to deal with libvirt profile.", > "# We'll probably treat it like we do with Neutron plugins.", > "# Until then, just include it in the default nova-compute role.", > "include tripleo::profile::base::nova::compute::libvirt", > "include ::tripleo::profile::base::database::mysql::client", > "2018-06-25 10:01:27,906 DEBUG: 24483 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-25 10:01:27,906 DEBUG: 24483 -- volumes []", > "2018-06-25 10:01:27,906 DEBUG: 24483 -- puppet_tags libvirtd_config,nova_config,file,libvirt_tls_password", > "2018-06-25 10:01:27,906 DEBUG: 24483 -- manifest include tripleo::profile::base::nova::libvirt", > "2018-06-25 10:01:27,907 DEBUG: 24483 -- volumes []", > "2018-06-25 10:01:27,907 DEBUG: 24483 
-- Existing service, appending puppet tags and manifest", > "2018-06-25 10:01:27,907 DEBUG: 24483 -- config_volume nova_libvirt", > "2018-06-25 10:01:27,907 DEBUG: 24483 -- puppet_tags ", > "2018-06-25 10:01:27,907 DEBUG: 24483 -- manifest include ::tripleo::profile::base::sshd", > "include tripleo::profile::base::nova::migration::target", > "2018-06-25 10:01:27,907 DEBUG: 24483 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-25 10:01:27,907 DEBUG: 24483 -- config_volume crond", > "2018-06-25 10:01:27,907 DEBUG: 24483 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-25 10:01:27,907 DEBUG: 24483 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:01:27,907 DEBUG: 24483 -- Adding new service", > "2018-06-25 10:01:27,907 INFO: 24483 -- Service compilation completed.", > "2018-06-25 10:01:27,908 DEBUG: 24483 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config', u'include ::tripleo::profile::base::ceilometer::agent::polling\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', []]", > "2018-06-25 10:01:27,908 DEBUG: 24483 -- - [u'nova_libvirt', u'file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password', u\"# TODO(emilien): figure how to deal with libvirt profile.\\n# We'll probably treat it like we do with Neutron plugins.\\n# Until then, just include it in the default nova-compute role.\\ninclude tripleo::profile::base::nova::compute::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sshd\\ninclude tripleo::profile::base::nova::migration::target\", u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', []]", > "2018-06-25 10:01:27,908 DEBUG: 24483 -- - [u'crond', 
'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', []]", > "2018-06-25 10:01:27,908 DEBUG: 24483 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']]", > "2018-06-25 10:01:27,908 DEBUG: 24483 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', [u'/etc/iscsi:/etc/iscsi']]", > "2018-06-25 10:01:27,908 INFO: 24483 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-06-25 10:01:27,923 INFO: 24484 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-25 10:01:27,924 INFO: 24485 -- Starting configuration of nova_libvirt using image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-25 10:01:27,924 INFO: 24486 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:01:27,924 DEBUG: 24484 -- config_volume ceilometer", > "2018-06-25 10:01:27,924 DEBUG: 24485 -- config_volume nova_libvirt", > "2018-06-25 10:01:27,924 DEBUG: 24486 -- config_volume crond", > "2018-06-25 10:01:27,924 DEBUG: 24484 -- puppet_tags file,file_line,concat,augeas,cron,ceilometer_config", > "2018-06-25 10:01:27,924 DEBUG: 24485 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password", > "2018-06-25 10:01:27,924 DEBUG: 24484 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", 
> "2018-06-25 10:01:27,924 DEBUG: 24486 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-06-25 10:01:27,924 DEBUG: 24485 -- manifest # TODO(emilien): figure how to deal with libvirt profile.", > "include tripleo::profile::base::nova::libvirt", > "include ::tripleo::profile::base::sshd", > "2018-06-25 10:01:27,924 DEBUG: 24484 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-25 10:01:27,924 DEBUG: 24486 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-25 10:01:27,924 DEBUG: 24485 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-25 10:01:27,924 DEBUG: 24484 -- volumes []", > "2018-06-25 10:01:27,924 DEBUG: 24486 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:01:27,924 DEBUG: 24485 -- volumes []", > "2018-06-25 10:01:27,924 DEBUG: 24486 -- volumes []", > "2018-06-25 10:01:27,926 INFO: 24484 -- Removing container: docker-puppet-ceilometer", > "2018-06-25 10:01:27,926 INFO: 24485 -- Removing container: docker-puppet-nova_libvirt", > "2018-06-25 10:01:27,926 INFO: 24486 -- Removing container: docker-puppet-crond", > "2018-06-25 10:01:28,021 INFO: 24486 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:01:28,026 INFO: 24485 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-25 10:01:28,027 INFO: 24484 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-25 10:01:40,713 DEBUG: 24486 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "e0f71f706c2a: Pulling fs layer", > "121ab4741000: Pulling fs layer", > "a8ff0031dfcb: Pulling fs layer", > "a94d9ea04263: Pulling fs layer", > "a94d9ea04263: Waiting", > "121ab4741000: Verifying Checksum", > "121ab4741000: Download complete", > "a8ff0031dfcb: Verifying Checksum", > "a8ff0031dfcb: Download complete", > "e0f71f706c2a: Verifying Checksum", > "e0f71f706c2a: Download complete", > "a94d9ea04263: Verifying Checksum", > "a94d9ea04263: Download complete", > "e0f71f706c2a: Pull complete", > "121ab4741000: Pull complete", > "a8ff0031dfcb: Pull complete", > "a94d9ea04263: Pull complete", > "Digest: sha256:cbc58f1f133447db6c3e634ca05251825f6a2ede8528959b5cd6e0cb1c3de3ba", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:01:40,717 DEBUG: 24486 -- NET_HOST enabled", > "2018-06-25 10:01:40,717 DEBUG: 24486 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpuneq4L:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:01:47,807 DEBUG: 24484 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "c66228eb2ac7: Pulling fs layer", > "333aa6b2b383: Pulling fs layer", > "1eb9ef5adcb4: Pulling fs layer", > "c66228eb2ac7: Waiting", > "1eb9ef5adcb4: Waiting", > "333aa6b2b383: Waiting", > "c66228eb2ac7: Verifying Checksum", > "c66228eb2ac7: Download complete", > "333aa6b2b383: Verifying Checksum", > "333aa6b2b383: Download complete", > "1eb9ef5adcb4: Verifying Checksum", > "1eb9ef5adcb4: Download complete", > "c66228eb2ac7: Pull complete", > "333aa6b2b383: Pull complete", > "1eb9ef5adcb4: Pull complete", > "Digest: sha256:3f638e03aaf1d7e303183e06ff1627a5a0efeaef228a7be1e9667ae62d7d6a1b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-25 10:01:47,812 DEBUG: 24484 -- NET_HOST enabled", > "2018-06-25 10:01:47,812 DEBUG: 24484 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config --env NAME=ceilometer --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpPpTzTl:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-25 10:01:49,774 DEBUG: 24486 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.52 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}13ae5d5b43716a32da6855edd3f15758'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.31 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " File: 0.01", > " Cron: 0.02", > " Config retrieval: 0.63", > " Total: 0.66", > " Last run: 1529920908", > "Version:", > " Config: 1529920907", > " Puppet: 4.8.2", > "Gathering files modified after 2018-06-25 10:01:41.096814383 +0000", > "2018-06-25 10:01:49,775 DEBUG: 24486 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=compute-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize 
--detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:01:41.096814383 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ awk '{print $1}'", > "+ md5sum", > "tar: Removing leading `/' from member names", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-06-25 10:01:49,775 
INFO: 24486 -- Removing container: docker-puppet-crond", > "2018-06-25 10:01:49,832 DEBUG: 24486 -- docker-puppet-crond", > "2018-06-25 10:01:49,832 INFO: 24486 -- Finished processing puppet configs for crond", > "2018-06-25 10:01:49,834 INFO: 24486 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-25 10:01:49,834 DEBUG: 24486 -- config_volume neutron", > "2018-06-25 10:01:49,834 DEBUG: 24486 -- puppet_tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-06-25 10:01:49,834 DEBUG: 24486 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "include ::tripleo::profile::base::neutron::ovs", > "2018-06-25 10:01:49,834 DEBUG: 24486 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-25 10:01:49,834 DEBUG: 24486 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-06-25 10:01:49,835 INFO: 24486 -- Removing container: docker-puppet-neutron", > "2018-06-25 10:01:49,951 INFO: 24486 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-25 10:01:55,011 DEBUG: 24486 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "ea1d509b6f44: Pulling fs layer", > "e9f9993bb931: Pulling fs layer", > "e9f9993bb931: Verifying Checksum", > "e9f9993bb931: Download complete", > "ea1d509b6f44: Verifying Checksum", > "ea1d509b6f44: Download complete", > "ea1d509b6f44: Pull complete", > "e9f9993bb931: Pull complete", > "Digest: sha256:af12594500608f07f8d38590e2c9b2983e5d81ae8b63aec042f36411b0e76adc", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-25 10:01:55,016 DEBUG: 24486 -- NET_HOST enabled", > "2018-06-25 10:01:55,017 DEBUG: 24486 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpmR8ABL:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-25 10:01:57,566 DEBUG: 24484 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.28 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[database/metering_time_to_live]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/filter_project]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/archive_policy]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/resources_definition_file]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 1.01 seconds", > " Total: 29", > " Success: 29", > " Total: 141", > " Skipped: 22", > " Out of sync: 
29", > " Changed: 29", > " Ceilometer config: 0.89", > " Config retrieval: 1.53", > " Last run: 1529920916", > " Total: 2.42", > " Resources: 0.00", > " Config: 1529920914", > "Gathering files modified after 2018-06-25 10:01:48.090801043 +0000", > "2018-06-25 10:01:57,567 DEBUG: 24484 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config /etc/config.pp", > "Warning: ModuleLoader: module 'ceilometer' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > "Warning: ModuleLoader: module 'oslo' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:01:48.090801043 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/ceilometer --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-06-25 10:01:57,567 INFO: 24484 -- Removing container: docker-puppet-ceilometer", > "2018-06-25 10:01:57,619 DEBUG: 24484 -- docker-puppet-ceilometer", > "2018-06-25 10:01:57,619 INFO: 24484 -- Finished processing puppet configs for ceilometer", > "2018-06-25 10:01:57,619 INFO: 24484 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-25 10:01:57,620 DEBUG: 24484 -- config_volume iscsid", > "2018-06-25 10:01:57,620 DEBUG: 24484 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-06-25 10:01:57,620 DEBUG: 24484 -- manifest include ::tripleo::profile::base::iscsid", > "2018-06-25 10:01:57,620 DEBUG: 24484 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-25 10:01:57,620 DEBUG: 24484 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-06-25 10:01:57,620 INFO: 24484 -- Removing container: docker-puppet-iscsid", > "2018-06-25 10:01:57,715 INFO: 24484 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-25 10:01:58,395 DEBUG: 24484 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "ab4eae34093d: Pulling fs layer", > "ab4eae34093d: Verifying Checksum", > "ab4eae34093d: Download complete", > "ab4eae34093d: Pull complete", > "Digest: sha256:a46aa93fee87b0f173118da5c2a18dc271772adb839a481ec07f2a53534ac53c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-25 10:01:58,399 DEBUG: 24484 -- NET_HOST enabled", > "2018-06-25 10:01:58,399 DEBUG: 24484 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpERHduF:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-25 10:02:02,490 DEBUG: 24485 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-compute ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-compute", > "0e3031608420: Pulling fs layer", > "9c13697fe587: Pulling fs layer", > "0e3031608420: Waiting", > "9c13697fe587: Waiting", > "0e3031608420: Verifying Checksum", > "0e3031608420: Download complete", > "9c13697fe587: Verifying Checksum", > "9c13697fe587: Download complete", > "0e3031608420: Pull complete", > "9c13697fe587: Pull complete", > "Digest: sha256:c6b75506ba5602b470f8dbfdcc57e0bcd20fc363d265aa234469343e439fa65a", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-25 10:02:02,494 DEBUG: 24485 -- NET_HOST enabled", > "2018-06-25 10:02:02,494 DEBUG: 24485 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_libvirt --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpEEbur5:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-25 10:02:05,113 DEBUG: 24484 -- Notice: hiera(): Cannot load backend 
module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.53 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > "Notice: Applied catalog in 0.08 seconds", > " Total: 10", > " Skipped: 8", > " File: 0.00", > " Exec: 0.02", > " Config retrieval: 0.59", > " Total: 0.61", > " Last run: 1529920924", > " Config: 1529920923", > "Gathering files modified after 2018-06-25 10:01:58.632781846 +0000", > "2018-06-25 10:02:05,114 DEBUG: 24484 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:01:58.632781846 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/iscsid --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-06-25 10:02:05,114 
INFO: 24484 -- Removing container: docker-puppet-iscsid", > "2018-06-25 10:02:05,165 DEBUG: 24484 -- docker-puppet-iscsid", > "2018-06-25 10:02:05,165 INFO: 24484 -- Finished processing puppet configs for iscsid", > "2018-06-25 10:02:05,506 DEBUG: 24486 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.48 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: 
/Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_password]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_userid]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog in 0.84 seconds", > " Total: 48", > " Success: 48", > " Total: 174", > " Skipped: 27", > " Out of sync: 48", > " Changed: 48", > " Neutron plugin ml2: 0.03", > " Neutron agent ovs: 0.09", > " Neutron config: 0.54", > " Config retrieval: 2.67", > " Total: 3.34", > " Config: 1529920920", > "Gathering files modified after 2018-06-25 10:01:55.255787885 +0000", > "2018-06-25 10:02:05,507 DEBUG: 24486 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time", > "+ touch /var/lib/config-data/neutron.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp", > "Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: Scope(Class[Neutron]): neutron::rabbit_host, neutron::rabbit_hosts, neutron::rabbit_password, neutron::rabbit_port, neutron::rabbit_user, neutron::rabbit_virtual_host and neutron::rpc_backend are deprecated. Please use neutron::default_transport_url instead.", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. 
at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 530]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/plugins/ml2.pp\", 45]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 132]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 219]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]", > "+ rsync_srcs+=' /var/www'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron", > "++ stat -c %y /var/lib/config-data/neutron.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:01:55.255787885 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/neutron", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/neutron --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01", > "2018-06-25 10:02:05,507 INFO: 24486 -- Removing container: docker-puppet-neutron", > "2018-06-25 10:02:05,557 DEBUG: 24486 -- docker-puppet-neutron", > "2018-06-25 10:02:05,557 INFO: 24486 -- Finished processing puppet configs for neutron", > "2018-06-25 10:02:19,322 DEBUG: 24485 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.39 seconds", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{md5}056b96e7e8124e1bc55f77cba4e68ce7' to '{md5}7d9300db2bd0e67b7ca469e6d48284c7'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{md5}09c4fa846e8e27bfa3ab3325900d63ea' to '{md5}2f138c0278e1b666ec77a6d8ba3054a1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{md5}dff145cb4e519333c0096aae8de2e77c' to '{md5}ad92fb2746961fc5cb63ffbe8b9193d9'", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: 
/Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/heal_instance_info_cache_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/vncserver_proxyclient_address]/ensure: created", > 
"Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/keymap]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[glance/verify_glance_signatures]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tls]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tcp]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{md5}4d5412d6bfca879d8dc55ef542d76db6'", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/vncserver_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created", > "Notice: 
/Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_group]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_ro]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_rw]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_ro_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_rw_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: 
created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}40d961cd3154f0439fcac1a50bd77b96' to '{md5}5d943a01ffd64865ad5d5710b467b752'", > "Notice: Applied catalog in 7.65 seconds", > " Total: 103", > " Success: 103", > " Changed: 103", > " Out of sync: 103", > " Total: 313", > " Skipped: 47", > " Concat file: 0.00", > " Concat fragment: 0.00", > " File line: 0.00", > " Exec: 0.01", > " Libvirtd config: 0.02", > " File: 0.03", > " Package: 0.08", > " Augeas: 0.67", > " Total: 10.10", > " Last run: 1529920937", > " Config retrieval: 2.74", > " Nova config: 6.54", > " Config: 1529920927", > "Gathering files modified after 2018-06-25 10:02:02.705774563 +0000", > "2018-06-25 10:02:19,322 DEBUG: 24485 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password'", > "+ origin_of_time=/var/lib/config-data/nova_libvirt.origin_of_time", > "+ touch 
/var/lib/config-data/nova_libvirt.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 530]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 533]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > " with Stdlib::Compat::Bool. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Unknown variable: '::nova::vncproxy::host'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:31:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_protocol'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:36:5", > "Warning: Unknown variable: '::nova::vncproxy::port'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:41:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_path'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:46:5", > "Warning: Unknown variable: '::nova::compute::pci_passthrough'. at /etc/puppet/modules/nova/manifests/compute/pci.pp:19:38", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/compute/libvirt.pp\", 278]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute/libvirt.pp\", 33]", > " with Stdlib::Compat::Ip_Address. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "Warning: Exec[set libvirt sasl credentials](provider=posix): Cannot understand environment setting \"TLS_PASSWORD=\"", > "+ rsync_srcs+=' /var/lib/nova/.ssh'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/nova/.ssh /var/lib/config-data/nova_libvirt", > "++ stat -c %y /var/lib/config-data/nova_libvirt.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:02:02.705774563 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova_libvirt", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_libvirt", > "++ find /etc /root /opt /var/spool/cron /var/lib/nova/.ssh -newer /var/lib/config-data/nova_libvirt.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova_libvirt --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova_libvirt --mtime=1970-01-01", > "2018-06-25 10:02:19,322 INFO: 24485 -- Removing container: docker-puppet-nova_libvirt", > "2018-06-25 10:02:19,368 DEBUG: 24485 -- docker-puppet-nova_libvirt", > "2018-06-25 10:02:19,368 INFO: 24485 -- Finished processing puppet configs for nova_libvirt", > "2018-06-25 10:02:19,369 DEBUG: 24483 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-06-25 10:02:19,369 DEBUG: 24483 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-06-25 10:02:19,373 DEBUG: 24483 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-25 10:02:19,373 DEBUG: 24483 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-25 10:02:19,373 DEBUG: 24483 -- Updating config hash for neutron_ovs_bridge, config_volume=iscsid 
hash=68c189a6e4c993b8096178be2394c374", > "2018-06-25 10:02:19,374 DEBUG: 24483 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-25 10:02:19,374 DEBUG: 24483 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-25 10:02:19,374 DEBUG: 24483 -- Updating config hash for nova_libvirt, config_volume=iscsid hash=b7d4b0d5df75da9e48b6ca6411edc7cc", > "2018-06-25 10:02:19,374 DEBUG: 24483 -- Updating config hash for nova_virtlogd, config_volume=iscsid hash=b7d4b0d5df75da9e48b6ca6411edc7cc", > "2018-06-25 10:02:19,375 DEBUG: 24483 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-25 10:02:19,376 DEBUG: 24483 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-25 10:02:19,376 DEBUG: 24483 -- Updating config hash for ceilometer_agent_compute, config_volume=iscsid hash=48ad475cf122e602f70ea9b2a5fedf4f", > "2018-06-25 10:02:19,376 DEBUG: 24483 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt/etc", > "2018-06-25 10:02:19,376 DEBUG: 24483 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-25 10:02:19,376 DEBUG: 24483 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-25 10:02:19,376 DEBUG: 24483 -- Updating config hash for neutron_ovs_agent, config_volume=iscsid hash=68c189a6e4c993b8096178be2394c374", > "2018-06-25 10:02:19,376 DEBUG: 24483 -- Looking for 
hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-25 10:02:19,376 DEBUG: 24483 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-25 10:02:19,376 DEBUG: 24483 -- Updating config hash for nova_migration_target, config_volume=iscsid hash=b7d4b0d5df75da9e48b6ca6411edc7cc", > "2018-06-25 10:02:19,377 DEBUG: 24483 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-25 10:02:19,377 DEBUG: 24483 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-25 10:02:19,377 DEBUG: 24483 -- Updating config hash for nova_compute, config_volume=iscsid hash=b7d4b0d5df75da9e48b6ca6411edc7cc", > "2018-06-25 10:02:19,377 DEBUG: 24483 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-25 10:02:19,377 DEBUG: 24483 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-25 10:02:19,377 DEBUG: 24483 -- Updating config hash for logrotate_crond, config_volume=iscsid hash=c9f482ce7a193ef6abe1b65c33f3ec7f" > ] >} >2018-06-25 06:04:05,413 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-06-25 10:01:27,843 INFO: 9420 -- Running docker-puppet", > "2018-06-25 10:01:27,843 DEBUG: 9420 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-06-25 10:01:27,844 DEBUG: 9420 -- config_volume aodh", > "2018-06-25 10:01:27,844 DEBUG: 9420 -- puppet_tags aodh_api_paste_ini,aodh_config", > 
"2018-06-25 10:01:27,844 DEBUG: 9420 -- manifest include tripleo::profile::base::aodh::api", > "", > "include ::tripleo::profile::base::database::mysql::client", > "2018-06-25 10:01:27,844 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-25 10:01:27,844 DEBUG: 9420 -- volumes []", > "2018-06-25 10:01:27,844 DEBUG: 9420 -- Adding new service", > "2018-06-25 10:01:27,844 DEBUG: 9420 -- puppet_tags aodh_config", > "2018-06-25 10:01:27,844 DEBUG: 9420 -- manifest include tripleo::profile::base::aodh::evaluator", > "2018-06-25 10:01:27,845 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-25 10:01:27,845 DEBUG: 9420 -- volumes []", > "2018-06-25 10:01:27,845 DEBUG: 9420 -- Existing service, appending puppet tags and manifest", > "2018-06-25 10:01:27,845 DEBUG: 9420 -- config_volume aodh", > "2018-06-25 10:01:27,845 DEBUG: 9420 -- puppet_tags aodh_config", > "2018-06-25 10:01:27,845 DEBUG: 9420 -- manifest include tripleo::profile::base::aodh::listener", > "2018-06-25 10:01:27,845 DEBUG: 9420 -- manifest include tripleo::profile::base::aodh::notifier", > "2018-06-25 10:01:27,845 DEBUG: 9420 -- config_volume ceilometer", > "2018-06-25 10:01:27,845 DEBUG: 9420 -- puppet_tags ceilometer_config", > "2018-06-25 10:01:27,845 DEBUG: 9420 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "2018-06-25 10:01:27,845 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-25 10:01:27,845 DEBUG: 9420 -- Adding new service", > "2018-06-25 10:01:27,846 DEBUG: 9420 -- config_volume ceilometer", > "2018-06-25 10:01:27,846 DEBUG: 9420 -- puppet_tags ceilometer_config", > "2018-06-25 10:01:27,846 DEBUG: 9420 -- manifest include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-06-25 10:01:27,846 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > 
"2018-06-25 10:01:27,846 DEBUG: 9420 -- volumes []", > "2018-06-25 10:01:27,846 DEBUG: 9420 -- Existing service, appending puppet tags and manifest", > "2018-06-25 10:01:27,846 DEBUG: 9420 -- config_volume cinder", > "2018-06-25 10:01:27,846 DEBUG: 9420 -- puppet_tags cinder_config,file,concat,file_line", > "2018-06-25 10:01:27,846 DEBUG: 9420 -- manifest include ::tripleo::profile::base::cinder::api", > "2018-06-25 10:01:27,846 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-25 10:01:27,846 DEBUG: 9420 -- Adding new service", > "2018-06-25 10:01:27,846 DEBUG: 9420 -- manifest include ::tripleo::profile::base::cinder::backup::ceph", > "2018-06-25 10:01:27,846 DEBUG: 9420 -- manifest include ::tripleo::profile::base::cinder::scheduler", > "2018-06-25 10:01:27,847 DEBUG: 9420 -- volumes []", > "2018-06-25 10:01:27,847 DEBUG: 9420 -- Existing service, appending puppet tags and manifest", > "2018-06-25 10:01:27,847 DEBUG: 9420 -- config_volume cinder", > "2018-06-25 10:01:27,847 DEBUG: 9420 -- puppet_tags cinder_config,file,concat,file_line", > "2018-06-25 10:01:27,847 DEBUG: 9420 -- manifest include ::tripleo::profile::base::lvm", > "include ::tripleo::profile::base::cinder::volume", > "2018-06-25 10:01:27,847 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-25 10:01:27,847 DEBUG: 9420 -- config_volume clustercheck", > "2018-06-25 10:01:27,847 DEBUG: 9420 -- puppet_tags file", > "2018-06-25 10:01:27,847 DEBUG: 9420 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-06-25 10:01:27,847 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-25 10:01:27,847 DEBUG: 9420 -- Adding new service", > "2018-06-25 10:01:27,847 DEBUG: 9420 -- config_volume glance_api", > "2018-06-25 10:01:27,847 DEBUG: 9420 -- puppet_tags glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > 
"2018-06-25 10:01:27,847 DEBUG: 9420 -- manifest include ::tripleo::profile::base::glance::api", > "2018-06-25 10:01:27,847 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-25 10:01:27,847 DEBUG: 9420 -- config_volume gnocchi", > "2018-06-25 10:01:27,848 DEBUG: 9420 -- puppet_tags gnocchi_api_paste_ini,gnocchi_config", > "2018-06-25 10:01:27,848 DEBUG: 9420 -- manifest include ::tripleo::profile::base::gnocchi::api", > "2018-06-25 10:01:27,848 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-25 10:01:27,848 DEBUG: 9420 -- volumes []", > "2018-06-25 10:01:27,848 DEBUG: 9420 -- Adding new service", > "2018-06-25 10:01:27,848 DEBUG: 9420 -- config_volume gnocchi", > "2018-06-25 10:01:27,848 DEBUG: 9420 -- puppet_tags gnocchi_config", > "2018-06-25 10:01:27,848 DEBUG: 9420 -- manifest include ::tripleo::profile::base::gnocchi::metricd", > "2018-06-25 10:01:27,848 DEBUG: 9420 -- Existing service, appending puppet tags and manifest", > "2018-06-25 10:01:27,848 DEBUG: 9420 -- manifest include ::tripleo::profile::base::gnocchi::statsd", > "2018-06-25 10:01:27,848 DEBUG: 9420 -- config_volume haproxy", > "2018-06-25 10:01:27,848 DEBUG: 9420 -- puppet_tags haproxy_config", > "2018-06-25 10:01:27,848 DEBUG: 9420 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "class tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}", > "['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::pacemaker::haproxy_bundle", > "2018-06-25 10:01:27,848 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-25 10:01:27,848 DEBUG: 9420 
-- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-06-25 10:01:27,849 DEBUG: 9420 -- config_volume heat_api", > "2018-06-25 10:01:27,849 DEBUG: 9420 -- puppet_tags heat_config,file,concat,file_line", > "2018-06-25 10:01:27,849 DEBUG: 9420 -- manifest include ::tripleo::profile::base::heat::api", > "2018-06-25 10:01:27,849 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-25 10:01:27,849 DEBUG: 9420 -- volumes []", > "2018-06-25 10:01:27,849 DEBUG: 9420 -- Adding new service", > "2018-06-25 10:01:27,849 DEBUG: 9420 -- config_volume heat_api_cfn", > "2018-06-25 10:01:27,849 DEBUG: 9420 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-06-25 10:01:27,849 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-25 10:01:27,849 DEBUG: 9420 -- config_volume heat", > "2018-06-25 10:01:27,849 DEBUG: 9420 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-06-25 10:01:27,849 DEBUG: 9420 -- config_volume horizon", > "2018-06-25 10:01:27,849 DEBUG: 9420 -- puppet_tags horizon_config", > "2018-06-25 10:01:27,849 DEBUG: 9420 -- manifest include ::tripleo::profile::base::horizon", > "2018-06-25 10:01:27,849 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-25 10:01:27,850 DEBUG: 9420 -- volumes []", > "2018-06-25 10:01:27,850 DEBUG: 9420 -- Adding new service", > "2018-06-25 10:01:27,850 DEBUG: 9420 -- config_volume iscsid", > "2018-06-25 10:01:27,850 DEBUG: 9420 -- puppet_tags iscsid_config", > "2018-06-25 10:01:27,850 DEBUG: 9420 -- manifest include ::tripleo::profile::base::iscsid", > "2018-06-25 10:01:27,850 DEBUG: 9420 -- config_image 
192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-25 10:01:27,850 DEBUG: 9420 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-06-25 10:01:27,850 DEBUG: 9420 -- config_volume keystone", > "2018-06-25 10:01:27,850 DEBUG: 9420 -- puppet_tags keystone_config,keystone_domain_config", > "2018-06-25 10:01:27,850 DEBUG: 9420 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::base::keystone", > "2018-06-25 10:01:27,850 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-25 10:01:27,850 DEBUG: 9420 -- config_volume memcached", > "2018-06-25 10:01:27,850 DEBUG: 9420 -- puppet_tags file", > "2018-06-25 10:01:27,850 DEBUG: 9420 -- manifest include ::tripleo::profile::base::memcached", > "2018-06-25 10:01:27,850 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-25 10:01:27,850 DEBUG: 9420 -- config_volume mysql", > "2018-06-25 10:01:27,851 DEBUG: 9420 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "exec {'wait-for-settle': command => '/bin/true' }", > "include ::tripleo::profile::pacemaker::database::mysql_bundle", > "2018-06-25 10:01:27,851 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-25 10:01:27,851 DEBUG: 9420 -- volumes []", > "2018-06-25 10:01:27,851 DEBUG: 9420 -- Adding new service", > "2018-06-25 10:01:27,851 DEBUG: 9420 -- config_volume neutron", > "2018-06-25 10:01:27,851 DEBUG: 9420 -- puppet_tags neutron_config,neutron_api_config", > "2018-06-25 10:01:27,851 DEBUG: 9420 -- manifest include tripleo::profile::base::neutron::server", > "2018-06-25 10:01:27,851 DEBUG: 9420 -- config_image 
192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-25 10:01:27,851 DEBUG: 9420 -- puppet_tags neutron_plugin_ml2", > "2018-06-25 10:01:27,851 DEBUG: 9420 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-06-25 10:01:27,851 DEBUG: 9420 -- Existing service, appending puppet tags and manifest", > "2018-06-25 10:01:27,851 DEBUG: 9420 -- puppet_tags neutron_config,neutron_dhcp_agent_config", > "2018-06-25 10:01:27,851 DEBUG: 9420 -- manifest include tripleo::profile::base::neutron::dhcp", > "2018-06-25 10:01:27,852 DEBUG: 9420 -- puppet_tags neutron_config,neutron_l3_agent_config", > "2018-06-25 10:01:27,852 DEBUG: 9420 -- manifest include tripleo::profile::base::neutron::l3", > "2018-06-25 10:01:27,852 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-25 10:01:27,852 DEBUG: 9420 -- volumes []", > "2018-06-25 10:01:27,852 DEBUG: 9420 -- Existing service, appending puppet tags and manifest", > "2018-06-25 10:01:27,852 DEBUG: 9420 -- config_volume neutron", > "2018-06-25 10:01:27,852 DEBUG: 9420 -- puppet_tags neutron_config,neutron_metadata_agent_config", > "2018-06-25 10:01:27,852 DEBUG: 9420 -- manifest include tripleo::profile::base::neutron::metadata", > "2018-06-25 10:01:27,852 DEBUG: 9420 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-06-25 10:01:27,852 DEBUG: 9420 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-06-25 10:01:27,852 DEBUG: 9420 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-06-25 10:01:27,852 DEBUG: 9420 -- config_volume nova", > "2018-06-25 10:01:27,852 DEBUG: 9420 -- puppet_tags nova_config", > "2018-06-25 10:01:27,852 DEBUG: 9420 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::api", > "2018-06-25 10:01:27,852 DEBUG: 9420 -- config_image 
192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-25 10:01:27,853 DEBUG: 9420 -- Adding new service", > "2018-06-25 10:01:27,853 DEBUG: 9420 -- config_volume nova", > "2018-06-25 10:01:27,853 DEBUG: 9420 -- puppet_tags nova_config", > "2018-06-25 10:01:27,853 DEBUG: 9420 -- manifest include tripleo::profile::base::nova::conductor", > "2018-06-25 10:01:27,853 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-25 10:01:27,853 DEBUG: 9420 -- volumes []", > "2018-06-25 10:01:27,853 DEBUG: 9420 -- Existing service, appending puppet tags and manifest", > "2018-06-25 10:01:27,853 DEBUG: 9420 -- manifest include tripleo::profile::base::nova::consoleauth", > "2018-06-25 10:01:27,853 DEBUG: 9420 -- config_volume nova_placement", > "2018-06-25 10:01:27,853 DEBUG: 9420 -- manifest include tripleo::profile::base::nova::placement", > "2018-06-25 10:01:27,853 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-25 10:01:27,853 DEBUG: 9420 -- manifest include tripleo::profile::base::nova::scheduler", > "2018-06-25 10:01:27,854 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-25 10:01:27,854 DEBUG: 9420 -- volumes []", > "2018-06-25 10:01:27,854 DEBUG: 9420 -- Existing service, appending puppet tags and manifest", > "2018-06-25 10:01:27,854 DEBUG: 9420 -- config_volume nova", > "2018-06-25 10:01:27,854 DEBUG: 9420 -- puppet_tags nova_config", > "2018-06-25 10:01:27,854 DEBUG: 9420 -- manifest include tripleo::profile::base::nova::vncproxy", > "2018-06-25 10:01:27,854 DEBUG: 9420 -- config_volume crond", > "2018-06-25 10:01:27,854 DEBUG: 9420 -- puppet_tags ", > "2018-06-25 10:01:27,854 DEBUG: 9420 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-25 10:01:27,854 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:01:27,854 DEBUG: 9420 
-- Adding new service", > "2018-06-25 10:01:27,854 DEBUG: 9420 -- config_volume panko", > "2018-06-25 10:01:27,854 DEBUG: 9420 -- puppet_tags panko_api_paste_ini,panko_config", > "2018-06-25 10:01:27,854 DEBUG: 9420 -- manifest include tripleo::profile::base::panko::api", > "2018-06-25 10:01:27,854 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-25 10:01:27,854 DEBUG: 9420 -- config_volume rabbitmq", > "2018-06-25 10:01:27,855 DEBUG: 9420 -- puppet_tags file", > "2018-06-25 10:01:27,855 DEBUG: 9420 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::base::rabbitmq", > "2018-06-25 10:01:27,855 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-25 10:01:27,855 DEBUG: 9420 -- volumes []", > "2018-06-25 10:01:27,855 DEBUG: 9420 -- Adding new service", > "2018-06-25 10:01:27,855 DEBUG: 9420 -- config_volume redis", > "2018-06-25 10:01:27,855 DEBUG: 9420 -- puppet_tags exec", > "2018-06-25 10:01:27,855 DEBUG: 9420 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle", > "2018-06-25 10:01:27,855 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-25 10:01:27,855 DEBUG: 9420 -- config_volume sahara", > "2018-06-25 10:01:27,855 DEBUG: 9420 -- puppet_tags sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-06-25 10:01:27,855 DEBUG: 9420 -- manifest include ::tripleo::profile::base::sahara::api", > "2018-06-25 10:01:27,855 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-25 10:01:27,855 DEBUG: 9420 -- puppet_tags sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-06-25 10:01:27,855 DEBUG: 9420 -- manifest include ::tripleo::profile::base::sahara::engine", > "2018-06-25 10:01:27,855 DEBUG: 9420 -- 
Existing service, appending puppet tags and manifest", > "2018-06-25 10:01:27,856 DEBUG: 9420 -- config_volume swift", > "2018-06-25 10:01:27,856 DEBUG: 9420 -- puppet_tags swift_config,swift_proxy_config,swift_keymaster_config", > "2018-06-25 10:01:27,856 DEBUG: 9420 -- manifest include ::tripleo::profile::base::swift::proxy", > "2018-06-25 10:01:27,856 DEBUG: 9420 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-25 10:01:27,856 DEBUG: 9420 -- volumes []", > "2018-06-25 10:01:27,856 DEBUG: 9420 -- Adding new service", > "2018-06-25 10:01:27,856 DEBUG: 9420 -- config_volume swift_ringbuilder", > "2018-06-25 10:01:27,856 DEBUG: 9420 -- puppet_tags exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-06-25 10:01:27,856 DEBUG: 9420 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-06-25 10:01:27,856 DEBUG: 9420 -- puppet_tags swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-06-25 10:01:27,856 DEBUG: 9420 -- manifest include ::tripleo::profile::base::swift::storage", > "class xinetd() {}", > "2018-06-25 10:01:27,856 DEBUG: 9420 -- Existing service, appending puppet tags and manifest", > "2018-06-25 10:01:27,856 INFO: 9420 -- Service compilation completed.", > "2018-06-25 10:01:27,857 DEBUG: 9420 -- - [u'nova_placement', u'file,file_line,concat,augeas,cron,nova_config', u'include tripleo::profile::base::nova::placement\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', []]", > "2018-06-25 10:01:27,857 DEBUG: 9420 -- - [u'aodh', 
u'file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config', u'include tripleo::profile::base::aodh::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::evaluator\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::listener\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::notifier\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', []]", > "2018-06-25 10:01:27,857 DEBUG: 9420 -- - [u'heat_api', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api\\n', u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', []]", > "2018-06-25 10:01:27,857 DEBUG: 9420 -- - [u'swift_ringbuilder', u'file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball', u'include ::tripleo::profile::base::swift::ringbuilder', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', []]", > "2018-06-25 10:01:27,857 DEBUG: 9420 -- - [u'sahara', u'file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template', u'include ::tripleo::profile::base::sahara::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sahara::engine\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', []]", > "2018-06-25 10:01:27,857 DEBUG: 9420 -- - [u'mysql', u'file,file_line,concat,augeas,cron,file', 
u\"['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }\\nexec {'wait-for-settle': command => '/bin/true' }\\ninclude ::tripleo::profile::pacemaker::database::mysql_bundle\", u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', []]", > "2018-06-25 10:01:27,857 DEBUG: 9420 -- - [u'gnocchi', u'file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config', u'include ::tripleo::profile::base::gnocchi::api\\n\\ninclude ::tripleo::profile::base::gnocchi::metricd\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::gnocchi::statsd\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', []]", > "2018-06-25 10:01:27,857 DEBUG: 9420 -- - [u'clustercheck', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::pacemaker::clustercheck', u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', []]", > "2018-06-25 10:01:27,857 DEBUG: 9420 -- - [u'redis', u'file,file_line,concat,augeas,cron,exec', u'include ::tripleo::profile::pacemaker::database::redis_bundle', u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', []]", > "2018-06-25 10:01:27,857 DEBUG: 9420 -- - [u'nova', u'file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config', u\"['Nova_cell_v2'].each |String $val| { noop_resource($val) }\\ninclude tripleo::profile::base::nova::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::conductor\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::consoleauth\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude 
tripleo::profile::base::nova::vncproxy\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', []]", > "2018-06-25 10:01:27,857 DEBUG: 9420 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', [u'/etc/iscsi:/etc/iscsi']]", > "2018-06-25 10:01:27,858 DEBUG: 9420 -- - [u'glance_api', u'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config', u'include ::tripleo::profile::base::glance::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', []]", > "2018-06-25 10:01:27,858 DEBUG: 9420 -- - [u'keystone', u'file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config', u\"['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::keystone\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', []]", > "2018-06-25 10:01:27,858 DEBUG: 9420 -- - [u'memcached', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::base::memcached\\n', u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', []]", > "2018-06-25 10:01:27,858 DEBUG: 9420 -- - [u'panko', u'file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config', u'include tripleo::profile::base::panko::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', []]", > "2018-06-25 10:01:27,858 DEBUG: 9420 -- - [u'heat', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::engine\\n\\ninclude 
::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', []]", > "2018-06-25 10:01:27,858 DEBUG: 9420 -- - [u'cinder', u'file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line', u'include ::tripleo::profile::base::cinder::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::backup::ceph\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::lvm\\ninclude ::tripleo::profile::base::cinder::volume\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', []]", > "2018-06-25 10:01:27,858 DEBUG: 9420 -- - [u'swift', u'file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server', u'include ::tripleo::profile::base::swift::proxy\\n\\ninclude ::tripleo::profile::base::swift::storage\\n\\nclass xinetd() {}', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', []]", > "2018-06-25 10:01:27,858 DEBUG: 9420 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', []]", > "2018-06-25 10:01:27,858 DEBUG: 9420 -- - [u'haproxy', u'file,file_line,concat,augeas,cron,haproxy_config', u\"exec {'wait-for-settle': command => '/bin/true' }\\nclass tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = 
undef, $chain = undef, $destination = undef, $extras = undef){}\\n['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::pacemaker::haproxy_bundle\", u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']]", > "2018-06-25 10:01:27,858 DEBUG: 9420 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', u'include ::tripleo::profile::base::ceilometer::agent::polling\\n\\ninclude ::tripleo::profile::base::ceilometer::agent::notification\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', []]", > "2018-06-25 10:01:27,858 DEBUG: 9420 -- - [u'rabbitmq', u'file,file_line,concat,augeas,cron,file', u\"['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::rabbitmq\\n\", u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', []]", > "2018-06-25 10:01:27,858 DEBUG: 9420 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include tripleo::profile::base::neutron::server\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude tripleo::profile::base::neutron::dhcp\\n\\ninclude tripleo::profile::base::neutron::l3\\n\\ninclude tripleo::profile::base::neutron::metadata\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', 
u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']]", > "2018-06-25 10:01:27,858 DEBUG: 9420 -- - [u'horizon', u'file,file_line,concat,augeas,cron,horizon_config', u'include ::tripleo::profile::base::horizon\\n', u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', []]", > "2018-06-25 10:01:27,858 DEBUG: 9420 -- - [u'heat_api_cfn', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api_cfn\\n', u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4', []]", > "2018-06-25 10:01:27,858 INFO: 9420 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-06-25 10:01:27,869 INFO: 9421 -- Starting configuration of nova_placement using image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-25 10:01:27,869 INFO: 9422 -- Starting configuration of swift_ringbuilder using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-25 10:01:27,869 DEBUG: 9421 -- config_volume nova_placement", > "2018-06-25 10:01:27,869 DEBUG: 9422 -- config_volume swift_ringbuilder", > "2018-06-25 10:01:27,870 DEBUG: 9422 -- puppet_tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-06-25 10:01:27,870 DEBUG: 9421 -- puppet_tags file,file_line,concat,augeas,cron,nova_config", > "2018-06-25 10:01:27,870 DEBUG: 9421 -- manifest include tripleo::profile::base::nova::placement", > "2018-06-25 10:01:27,870 DEBUG: 9422 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-06-25 10:01:27,870 DEBUG: 9422 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-25 10:01:27,870 
DEBUG: 9421 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-25 10:01:27,870 DEBUG: 9422 -- volumes []", > "2018-06-25 10:01:27,870 DEBUG: 9421 -- volumes []", > "2018-06-25 10:01:27,870 INFO: 9423 -- Starting configuration of gnocchi using image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-25 10:01:27,870 DEBUG: 9423 -- config_volume gnocchi", > "2018-06-25 10:01:27,870 DEBUG: 9423 -- puppet_tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config", > "2018-06-25 10:01:27,871 DEBUG: 9423 -- manifest include ::tripleo::profile::base::gnocchi::api", > "include ::tripleo::profile::base::gnocchi::metricd", > "include ::tripleo::profile::base::gnocchi::statsd", > "2018-06-25 10:01:27,871 DEBUG: 9423 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-25 10:01:27,871 DEBUG: 9423 -- volumes []", > "2018-06-25 10:01:27,872 INFO: 9422 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-06-25 10:01:27,872 INFO: 9421 -- Removing container: docker-puppet-nova_placement", > "2018-06-25 10:01:27,872 INFO: 9423 -- Removing container: docker-puppet-gnocchi", > "2018-06-25 10:01:27,956 INFO: 9421 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-25 10:01:27,956 INFO: 9422 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-25 10:01:27,959 INFO: 9423 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-25 10:01:47,467 DEBUG: 9422 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server", > "e0f71f706c2a: Pulling fs layer", > "121ab4741000: Pulling fs layer", > "a8ff0031dfcb: Pulling fs layer", > "c66228eb2ac7: Pulling fs layer", > "a98c7da29d65: Pulling fs layer", > "c66228eb2ac7: Waiting", > "c4603b657b73: Pulling fs layer", > "c4603b657b73: Waiting", > "121ab4741000: Verifying Checksum", > "121ab4741000: Download complete", > "c66228eb2ac7: Verifying Checksum", > "c66228eb2ac7: Download complete", > "a8ff0031dfcb: Verifying Checksum", > "a8ff0031dfcb: Download complete", > "e0f71f706c2a: Download complete", > "a98c7da29d65: Verifying Checksum", > "a98c7da29d65: Download complete", > "c4603b657b73: Verifying Checksum", > "c4603b657b73: Download complete", > "e0f71f706c2a: Pull complete", > "121ab4741000: Pull complete", > "a8ff0031dfcb: Pull complete", > "c66228eb2ac7: Pull complete", > "a98c7da29d65: Pull complete", > "c4603b657b73: Pull complete", > "Digest: sha256:632f29598f1ea7b96a5573d0b5a942b3a1f571783804cdc07dac0910e97d1a87", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-25 10:01:47,471 DEBUG: 9422 -- NET_HOST enabled", > "2018-06-25 10:01:47,471 DEBUG: 9422 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift_ringbuilder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball --env NAME=swift_ringbuilder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpYXk5rN:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume 
/var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-25 10:01:51,858 DEBUG: 9421 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-placement-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-placement-api", > "0e3031608420: Pulling fs layer", > "dd9c4679b681: Pulling fs layer", > "0e3031608420: Waiting", > "dd9c4679b681: Waiting", > "dd9c4679b681: Verifying Checksum", > "dd9c4679b681: Download complete", > "0e3031608420: Verifying Checksum", > "0e3031608420: Download complete", > "0e3031608420: Pull complete", > "dd9c4679b681: Pull complete", > "Digest: sha256:2336d644bd74c35fe7e050376f6d7a1b718ae6faf3556cf63917aceecdf581b6", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-25 10:01:51,863 DEBUG: 9421 -- NET_HOST enabled", > "2018-06-25 10:01:51,863 DEBUG: 9421 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_placement --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config --env NAME=nova_placement --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpAZaVwp:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume 
/var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-25 10:01:54,633 DEBUG: 9423 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-gnocchi-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-gnocchi-api", > "64612d8109ce: Pulling fs layer", > "2d8b51759f9c: Pulling fs layer", > "2d8b51759f9c: Waiting", > "64612d8109ce: Waiting", > "2d8b51759f9c: Verifying Checksum", > "2d8b51759f9c: Download complete", > "64612d8109ce: Download complete", > "64612d8109ce: Pull complete", > "2d8b51759f9c: Pull complete", > "Digest: sha256:0824e3fa2c22ac0acb43883a29cce2fbdf54a9cce722e559cc5c6325e46c2142", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-25 10:01:54,637 DEBUG: 9423 -- NET_HOST enabled", > "2018-06-25 10:01:54,637 DEBUG: 9423 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-gnocchi --env PUPPET_TAGS=file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config --env NAME=gnocchi --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpDcWtMt:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume 
/var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-25 10:02:02,602 DEBUG: 9422 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.09 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[fetch_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Swift/File[/var/lib/swift]/group: group changed 'root' to 'swift'", > "Notice: /Stage[main]/Swift/File[/etc/swift/swift.conf]/owner: owner changed 'root' to 'swift'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[object]/Exec[create_object]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[account]/Exec[create_account]/returns: executed successfully", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[container]/Exec[create_container]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.15:%PORT%/d1]/Ring_object_device[172.17.4.15:6000/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.15:%PORT%/d1]/Ring_container_device[172.17.4.15:6001/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.15:%PORT%/d1]/Ring_account_device[172.17.4.15:6002/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[object]/Exec[rebalance_object]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[account]/Exec[rebalance_account]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[container]/Exec[rebalance_container]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]: Triggered 'refresh' from 3 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[upload_swift_ring_tarball]: Triggered 'refresh' from 2 events", > "Notice: Applied catalog in 4.68 seconds", > "Changes:", > " Total: 11", > "Events:", > " Success: 11", > "Resources:", > " Changed: 11", > " Out of sync: 11", > " Skipped: 19", > " Total: 36", > " Restarted: 6", > "Time:", > " File: 0.00", > " Ring container device: 0.58", > " Ring object device: 0.59", > " Ring account device: 0.59", > " Config 
retrieval: 1.26", > " Exec: 1.42", > " Last run: 1529920921", > " Total: 4.45", > "Version:", > " Config: 1529920915", > " Puppet: 4.8.2", > "Gathering files modified after 2018-06-25 10:01:47.795074371 +0000", > "2018-06-25 10:02:02,602 DEBUG: 9422 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball'", > "+ origin_of_time=/var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ touch /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=controller-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball /etc/config.pp", > "Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > 
"Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: ModuleLoader: module 'swift' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/ringbuilder.pp\", 113]:[\"/etc/config.pp\", 2]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/ringbuilder/create.pp\", 44]:", > "Warning: Unexpected line: Ring file /etc/swift/object.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta", > "Warning: Unexpected line: There are no devices in this ring, or all devices have been deleted", > "Warning: Unexpected line: Ring file /etc/swift/container.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Ring file /etc/swift/account.ring.gz not found, probably it hasn't been written yet", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ 
rsync_srcs+=' /var/www'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift_ringbuilder", > "++ stat -c %y /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:01:47.795074371 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift_ringbuilder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift_ringbuilder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift_ringbuilder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/swift_ringbuilder --mtime=1970-01-01", > "+ md5sum", > "+ awk '{print $1}'", > "tar: Removing leading `/' from member names", > "+ tar -c -f - /var/lib/config-data/puppet-generated/swift_ringbuilder --mtime=1970-01-01", > "2018-06-25 10:02:02,603 INFO: 9422 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-06-25 10:02:02,663 DEBUG: 9422 -- docker-puppet-swift_ringbuilder", > "2018-06-25 10:02:02,663 INFO: 9422 -- Finished processing puppet configs for swift_ringbuilder", > "2018-06-25 10:02:02,664 INFO: 9422 -- Starting configuration of sahara using image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-25 10:02:02,664 DEBUG: 9422 -- config_volume sahara", > "2018-06-25 10:02:02,664 DEBUG: 9422 -- puppet_tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-06-25 10:02:02,664 DEBUG: 9422 -- manifest include ::tripleo::profile::base::sahara::api", > "include ::tripleo::profile::base::sahara::engine", > "2018-06-25 10:02:02,664 DEBUG: 9422 -- config_image 
192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-25 10:02:02,664 DEBUG: 9422 -- volumes []", > "2018-06-25 10:02:02,665 INFO: 9422 -- Removing container: docker-puppet-sahara", > "2018-06-25 10:02:02,755 INFO: 9422 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-25 10:02:05,250 DEBUG: 9422 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-sahara-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-sahara-api", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "6c5f7e9a0fe8: Pulling fs layer", > "5f67eb984180: Pulling fs layer", > "5f67eb984180: Verifying Checksum", > "5f67eb984180: Download complete", > "6c5f7e9a0fe8: Download complete", > "6c5f7e9a0fe8: Pull complete", > "5f67eb984180: Pull complete", > "Digest: sha256:702a41a4d211978832441c041a232227b3d2484d71ef01a8bf7d5332091587a5", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-25 10:02:05,253 DEBUG: 9422 -- NET_HOST enabled", > "2018-06-25 10:02:05,253 DEBUG: 9422 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-sahara --env PUPPET_TAGS=file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template --env NAME=sahara --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpnZc5l8:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-25 10:02:07,058 DEBUG: 9423 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.89 seconds", > "Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{md5}9da85e58f3bd6c780ce76db603b7f028'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/File[mime_magic.conf]/ensure: defined content as '{md5}b258529b332429e2ff8344f726a95457'", > "Notice: /Stage[main]/Apache::Mod::Alias/File[alias.conf]/ensure: defined content as '{md5}983e865be85f5e0daaed7433db82995e'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/File[autoindex.conf]/ensure: defined content as '{md5}2421a3c6df32c7e38c2a7a22afdf5728'", > "Notice: /Stage[main]/Apache::Mod::Deflate/File[deflate.conf]/ensure: defined content as '{md5}a045d750d819b1e9dae3fbfb3f20edd5'", > "Notice: /Stage[main]/Apache::Mod::Dir/File[dir.conf]/ensure: defined content as '{md5}c741d8ea840e6eb999d739eed47c69d7'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/File[negotiation.conf]/ensure: defined content as '{md5}47284b5580b986a6ba32580b6ffb9fd7'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{md5}c7ede4173da1915b7ec088201f030c28'", > "Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{md5}f58b0483b70b4e73b5f67ff37b8f24a0'", > "Notice: 
/Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{md5}fa95c477a2085c1f7f17ee5f8eccfb90'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Gnocchi::Db/Gnocchi_config[indexer/url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/auth_mode]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage/Gnocchi_config[storage/coordination_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/redis_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_username]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_keyring]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_pool]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_conffile]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/workers]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/metric_processing_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/resource_id]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/archive_policy_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/flush_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/debug]/ensure: created", > "Notice: 
/Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Policy/Oslo::Policy[gnocchi_config]/Gnocchi_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Oslo::Middleware[gnocchi_config]/Gnocchi_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as 
'{md5}992f5bf69193d1ee7658b21d7c237ef4'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{md5}c6d1bc1fdbcb93bbd2596e4703f4108c' to '{md5}ac42062d69afa9d2671492ce0be87b7b'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{md5}785d35cb285e190d589163b45263ca89'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{md5}26e5d44aae258b3e9d821cbbbd3e2826'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{md5}0e8468ecc1265f8947b8725f4d1be9c0'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{md5}d1045f54d2798499ca0f030ca0eef920'", > "Notice: /Stage[main]/Apache::Mod::Actions/Apache::Mod[actions]/File[actions.load]/ensure: defined content as '{md5}599866dfaf734f60f7e2d41ee8235515'", > "Notice: /Stage[main]/Apache::Mod::Authn_core/Apache::Mod[authn_core]/File[authn_core.load]/ensure: defined content as '{md5}704d6e8b02b0eca0eba4083960d16c52'", > "Notice: /Stage[main]/Apache::Mod::Cache/Apache::Mod[cache]/File[cache.load]/ensure: defined content as '{md5}01e4d392225b518a65b0f7d6c4e21d29'", > "Notice: /Stage[main]/Apache::Mod::Ext_filter/Apache::Mod[ext_filter]/File[ext_filter.load]/ensure: defined content as '{md5}76d5e0ac3411a4be57ac33ebe2e52ac8'", > "Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{md5}e36257b9efab01459141d423cae57c7c'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/Apache::Mod[mime_magic]/File[mime_magic.load]/ensure: defined content as '{md5}cb8670bb2fb352aac7ebf3a85d52094c'", > "Notice: /Stage[main]/Apache::Mod::Rewrite/Apache::Mod[rewrite]/File[rewrite.load]/ensure: defined content as '{md5}26e2683352fc1599f29573ff0d934e79'", > "Notice: 
/Stage[main]/Apache::Mod::Speling/Apache::Mod[speling]/File[speling.load]/ensure: defined content as '{md5}f82e9e6b871a276c324c9eeffcec8a61'", > "Notice: /Stage[main]/Apache::Mod::Suexec/Apache::Mod[suexec]/File[suexec.load]/ensure: defined content as '{md5}c7d5c61c534ba423a79b0ae78ff9be35'", > "Notice: /Stage[main]/Apache::Mod::Version/Apache::Mod[version]/File[version.load]/ensure: defined content as '{md5}1c9243de22ace4dc8266442c48ae0c92'", > "Notice: /Stage[main]/Apache::Mod::Vhost_alias/Apache::Mod[vhost_alias]/File[vhost_alias.load]/ensure: defined content as '{md5}eca907865997d50d5130497665c3f82e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_digest]/File[auth_digest.load]/ensure: defined content as '{md5}df9e85f8da0b239fe8e698ae7ead4f60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_anon]/File[authn_anon.load]/ensure: defined content as '{md5}bf57b94b5aec35476fc2a2dc3861f132'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_dbm]/File[authn_dbm.load]/ensure: defined content as '{md5}90ee8f8ef1a017cacadfda4225e10651'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_dbm]/File[authz_dbm.load]/ensure: defined content as '{md5}c1363277984d22f99b70f7dce8753b60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_owner]/File[authz_owner.load]/ensure: defined content as '{md5}f30a9be1016df87f195449d9e02d1857'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[expires]/File[expires.load]/ensure: defined content as '{md5}f0825bad1e470de86ffabeb86dcc5d95'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[include]/File[include.load]/ensure: defined content as '{md5}88095a914eedc3c2c184dd5d74c3954c'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[logio]/File[logio.load]/ensure: defined content as '{md5}084533c7a44e9129d0e6df952e2472b6'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[substitute]/File[substitute.load]/ensure: defined content as 
'{md5}8077c34a71afcf41c8fc644830935915'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[usertrack]/File[usertrack.load]/ensure: defined content as '{md5}e95fbbf030fabec98b948f8dc217775c'", > "Notice: /Stage[main]/Apache::Mod::Alias/Apache::Mod[alias]/File[alias.load]/ensure: defined content as '{md5}3cf2fa309ccae4c29a4b875d0894cd79'", > "Notice: /Stage[main]/Apache::Mod::Authn_file/Apache::Mod[authn_file]/File[authn_file.load]/ensure: defined content as '{md5}d41656680003d7b890267bb73621c60b'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/Apache::Mod[autoindex]/File[autoindex.load]/ensure: defined content as '{md5}515cdf5b573e961a60d2931d39248648'", > "Notice: /Stage[main]/Apache::Mod::Dav/Apache::Mod[dav]/File[dav.load]/ensure: defined content as '{md5}588e496251838c4840c14b28b5aa7881'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/File[dav_fs.conf]/ensure: defined content as '{md5}899a57534f3d84efa81887ec93c90c9b'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/Apache::Mod[dav_fs]/File[dav_fs.load]/ensure: defined content as '{md5}2996277c73b1cd684a9a3111c355e0d3'", > "Notice: /Stage[main]/Apache::Mod::Deflate/Apache::Mod[deflate]/File[deflate.load]/ensure: defined content as '{md5}2d1a1afcae0c70557251829a8586eeaf'", > "Notice: /Stage[main]/Apache::Mod::Dir/Apache::Mod[dir]/File[dir.load]/ensure: defined content as '{md5}1bfb1c2a46d7351fc9eb47c659dee068'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/Apache::Mod[negotiation]/File[negotiation.load]/ensure: defined content as '{md5}d262ee6a5f20d9dd7f87770638dc2ccd'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{md5}ec6c99f7cc8e35bdbcf8028f652c9f6d'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_basic]/File[auth_basic.load]/ensure: defined content as '{md5}494bcf4b843f7908675d663d8dc1bdc8'", > "Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as 
'{md5}66a1e2064a140c3e7dca7ac33877700e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{md5}39942569bff2abdb259f9a347c7246bc'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[access_compat]/File[access_compat.load]/ensure: defined content as '{md5}d5feb88bec4570e2dbc41cce7e0de003'", > "Notice: /Stage[main]/Apache::Mod::Authz_user/Apache::Mod[authz_user]/File[authz_user.load]/ensure: defined content as '{md5}63594303ee808423679b1ea13dd5a784'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_groupfile]/File[authz_groupfile.load]/ensure: defined content as '{md5}ae005a36b3ac8c20af36c434561c8a75'", > "Notice: /Stage[main]/Apache::Mod::Env/Apache::Mod[env]/File[env.load]/ensure: defined content as '{md5}d74184d40d0ee24ba02626a188ee7e1a'", > "Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{md5}157529aafcf03fa491bc924103e4608e'", > "Notice: /Stage[main]/Apache::Mod::Cgi/Apache::Mod[cgi]/File[cgi.load]/ensure: defined content as '{md5}ac20c5c5779b37ab06b480d6485a0881'", > "Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{md5}c7726ef20347ef9a06ef68eeaad79765'", > "Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{md5}e282ac9f82fe5538692a4de3616fb695'", > "Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{md5}ab31a6ea611785f74851b578572e4157'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d/httpd.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > "Notice: 
/Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed", > "Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/content: content changed '{md5}9e163ce201541f8aa36fcc1a372ed34d' to '{md5}b6f6f2773db25c777f1db887e7a3f57d'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{md5}8b3feb3fc2563de439920bb2c52cbd11'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{md5}e1795e051e7aae1f865fde0d3b86a507'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-lua.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi.conf]/ensure: removed", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[/var/www/cgi-bin/gnocchi]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[gnocchi_wsgi]/ensure: defined content as '{md5}c03530dd30d25ec70b705e0c2f43df7a'", > "Notice: 
/Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/Apache::Vhost[gnocchi_wsgi]/Concat[10-gnocchi_wsgi.conf]/File[/etc/httpd/conf.d/10-gnocchi_wsgi.conf]/ensure: defined content as '{md5}234645e33288333ddc889182f196404f'", > "Notice: Applied catalog in 1.08 seconds", > " Total: 110", > " Success: 110", > " Changed: 110", > " Out of sync: 110", > " Total: 253", > " Skipped: 42", > " Resources: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Concat fragment: 0.00", > " Augeas: 0.02", > " Gnocchi config: 0.25", > " File: 0.29", > " Last run: 1529920925", > " Config retrieval: 4.44", > " Total: 5.01", > " Config: 1529920919", > "Gathering files modified after 2018-06-25 10:01:54.852125663 +0000", > "2018-06-25 10:02:07,058 DEBUG: 9423 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config'", > "+ origin_of_time=/var/lib/config-data/gnocchi.origin_of_time", > "+ touch /var/lib/config-data/gnocchi.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config /etc/config.pp", > "Warning: ModuleLoader: module 'gnocchi' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/gnocchi/manifests/db.pp\", 26]:[\"/etc/puppet/modules/gnocchi/manifests/init.pp\", 54]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/gnocchi/manifests/config.pp\", 29]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/gnocchi.pp\", 31]", > "Warning: Scope(Class[Gnocchi::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: ModuleLoader: module 'oslo' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'keystone' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/gnocchi", > "++ stat -c %y /var/lib/config-data/gnocchi.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:01:54.852125663 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/gnocchi", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/gnocchi", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/gnocchi.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/gnocchi --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/gnocchi --mtime=1970-01-01", > "2018-06-25 10:02:07,059 INFO: 9423 -- Removing container: docker-puppet-gnocchi", > "2018-06-25 10:02:07,124 DEBUG: 9423 -- docker-puppet-gnocchi", > "2018-06-25 10:02:07,125 INFO: 9423 -- Finished processing puppet configs for gnocchi", > "2018-06-25 
10:02:07,125 INFO: 9423 -- Starting configuration of clustercheck using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-25 10:02:07,125 DEBUG: 9423 -- config_volume clustercheck", > "2018-06-25 10:02:07,125 DEBUG: 9423 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-06-25 10:02:07,125 DEBUG: 9423 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-06-25 10:02:07,125 DEBUG: 9423 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-25 10:02:07,126 DEBUG: 9423 -- volumes []", > "2018-06-25 10:02:07,126 INFO: 9423 -- Removing container: docker-puppet-clustercheck", > "2018-06-25 10:02:07,199 INFO: 9423 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-25 10:02:11,445 DEBUG: 9421 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.33 seconds", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created", > "Notice: 
/Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: 
created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/memcached_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/username]/ensure: created", > "Notice: 
/Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}3f955f02fd456b1b595c3d9ee3364991'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/File[/etc/httpd/conf.d/00-nova-placement-api.conf]/content: content changed '{md5}611e31d39e1635bfabc0aafc51b43d0b' to '{md5}612d455490cfecc4b51db6656ea39240'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[placement_wsgi]/ensure: defined content as '{md5}2c992c50344eb1765282cb9fb70126db'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/Apache::Vhost[placement_wsgi]/Concat[10-placement_wsgi.conf]/File[/etc/httpd/conf.d/10-placement_wsgi.conf]/ensure: defined content as '{md5}4f4a183c938febd2cc45010c4dc17e90'", > "Notice: Applied catalog in 7.50 seconds", > " Total: 132", > " Success: 132", > " Changed: 132", > " Out of sync: 132", > " Total: 371", > " Skipped: 39", > " Package: 0.10", > " File: 0.46", > " Total: 11.94", > " Last run: 1529920929", > " Config retrieval: 5.03", > " Nova config: 6.32", > " Config: 1529920917", > "Gathering files modified after 2018-06-25 10:01:52.079105677 +0000", > 
"2018-06-25 10:02:11,445 DEBUG: 9421 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova_placement.origin_of_time", > "+ touch /var/lib/config-data/nova_placement.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 530]", > " with Stdlib::Compat::Array. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 533]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Scope(Class[Nova::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova_placement", > "++ stat -c %y /var/lib/config-data/nova_placement.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:01:52.079105677 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova_placement", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_placement", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova_placement.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova_placement --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova_placement --mtime=1970-01-01", > "2018-06-25 10:02:11,446 INFO: 9421 -- Removing container: docker-puppet-nova_placement", > "2018-06-25 10:02:11,501 DEBUG: 9421 -- docker-puppet-nova_placement", > "2018-06-25 10:02:11,501 INFO: 9421 -- Finished processing puppet configs for nova_placement", > "2018-06-25 10:02:11,502 INFO: 9421 -- Starting configuration of aodh using image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-25 10:02:11,502 DEBUG: 9421 -- 
config_volume aodh", > "2018-06-25 10:02:11,502 DEBUG: 9421 -- puppet_tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config", > "2018-06-25 10:02:11,502 DEBUG: 9421 -- manifest include tripleo::profile::base::aodh::api", > "include tripleo::profile::base::aodh::evaluator", > "include tripleo::profile::base::aodh::listener", > "include tripleo::profile::base::aodh::notifier", > "2018-06-25 10:02:11,502 DEBUG: 9421 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-25 10:02:11,502 DEBUG: 9421 -- volumes []", > "2018-06-25 10:02:11,502 INFO: 9421 -- Removing container: docker-puppet-aodh", > "2018-06-25 10:02:11,570 INFO: 9421 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-25 10:02:13,678 DEBUG: 9423 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-mariadb ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-mariadb", > "2ee1f6a99b58: Pulling fs layer", > "2ee1f6a99b58: Verifying Checksum", > "2ee1f6a99b58: Download complete", > "2ee1f6a99b58: Pull complete", > "Digest: sha256:2a886d2154594b405341b26bdc272a2796459d288a4fde8b2ee6f5ca253f6792", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-25 10:02:13,681 DEBUG: 9423 -- NET_HOST enabled", > "2018-06-25 10:02:13,681 DEBUG: 9423 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-clustercheck --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=clustercheck --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpLYuY20:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-25 10:02:13,787 DEBUG: 9421 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-api", > "cb7d08d4cc0c: Pulling fs layer", > "6e57c8911d7b: Pulling fs layer", > "6e57c8911d7b: Verifying Checksum", > "6e57c8911d7b: Download complete", > "cb7d08d4cc0c: Verifying Checksum", > "cb7d08d4cc0c: Download complete", > "cb7d08d4cc0c: Pull complete", > "6e57c8911d7b: Pull complete", > "Digest: sha256:fa189b1bb39e6c29a0fe5a6e824ae0f89206ba6749e373e719edac2129e0ff6b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-25 10:02:13,791 DEBUG: 9421 -- NET_HOST enabled", > "2018-06-25 10:02:13,791 DEBUG: 9421 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-aodh --env PUPPET_TAGS=file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config --env NAME=aodh --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpxNa6wp:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-25 10:02:16,055 DEBUG: 9422 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.05 seconds", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/plugins]/ensure: created", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/port]/ensure: created", > "Notice: /Stage[main]/Sahara::Service::Api/Sahara_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Sahara::Policy/Oslo::Policy[sahara_config]/Sahara_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: 
/Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Default[sahara_config]/Sahara_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Rabbit[sahara_config]/Sahara_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Zmq[sahara_config]/Sahara_config[DEFAULT/rpc_zmq_host]/ensure: created", > "Notice: /Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 
1.64 seconds", > " Total: 25", > " Success: 25", > " Total: 196", > " Skipped: 23", > " Out of sync: 25", > " Changed: 25", > " Package: 0.05", > " Sahara config: 1.28", > " Last run: 1529920935", > " Config retrieval: 2.36", > " Total: 3.71", > " Config: 1529920931", > "Gathering files modified after 2018-06-25 10:02:05.461200158 +0000", > "2018-06-25 10:02:16,055 DEBUG: 9422 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template'", > "+ origin_of_time=/var/lib/config-data/sahara.origin_of_time", > "+ touch /var/lib/config-data/sahara.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template /etc/config.pp", > "Warning: ModuleLoader: module 'sahara' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/sahara/manifests/db.pp\", 69]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 380]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/sahara/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 381]", > "Warning: Scope(Class[Sahara]): The use_neutron parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Sahara]): sahara::admin_user, sahara::admin_password, sahara::auth_uri, sahara::identity_uri, sahara::admin_tenant_name and sahara::memcached_servers are deprecated. Please use sahara::keystone::authtoken::* parameters instead.", > "Warning: Scope(Class[Sahara::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/sahara", > "++ stat -c %y /var/lib/config-data/sahara.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:02:05.461200158 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/sahara", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/sahara", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/sahara.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/sahara --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/sahara --mtime=1970-01-01", > "2018-06-25 10:02:16,056 INFO: 9422 -- Removing container: docker-puppet-sahara", > "2018-06-25 10:02:16,097 DEBUG: 9422 -- docker-puppet-sahara", > "2018-06-25 10:02:16,098 INFO: 9422 -- Finished processing puppet configs for sahara", > "2018-06-25 10:02:16,098 INFO: 9422 -- Starting configuration of mysql using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-25 10:02:16,098 DEBUG: 9422 -- config_volume mysql", > "2018-06-25 10:02:16,098 DEBUG: 9422 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-06-25 10:02:16,098 DEBUG: 9422 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 
'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "2018-06-25 10:02:16,098 DEBUG: 9422 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-25 10:02:16,099 DEBUG: 9422 -- volumes []", > "2018-06-25 10:02:16,099 INFO: 9422 -- Removing container: docker-puppet-mysql", > "2018-06-25 10:02:16,151 INFO: 9422 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-25 10:02:16,155 DEBUG: 9422 -- NET_HOST enabled", > "2018-06-25 10:02:16,155 DEBUG: 9422 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-mysql --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=mysql --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpWbb76s:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-25 10:02:20,572 DEBUG: 9423 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.46 seconds", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}8734d516a175f8982e07d93b6d783a9f'", > "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/content: content changed '{md5}9ff8cc688dd9f0dfc45e5afd25c427a7' to '{md5}7d37008224e71625019cb48768f267e7'", > "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/mode: mode changed '0600' to '0644'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/Xinetd::Service[galera-monitor]/File[/etc/xinetd.d/galera-monitor]/ensure: defined content as '{md5}5b7a8ca5a29342fca15fa9d3f2a38cd0'", > "Notice: Applied catalog in 0.05 seconds", > " Total: 4", > " Success: 4", > " Total: 13", > " Out of sync: 3", > " Changed: 3", > " Skipped: 9", > " File: 0.03", > " Config retrieval: 0.63", > " Total: 0.65", > " Last run: 1529920939", > " Config: 1529920939", > "Gathering files modified after 2018-06-25 10:02:13.880257121 +0000", > "2018-06-25 10:02:20,572 DEBUG: 9423 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,file ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,file'", > "+ origin_of_time=/var/lib/config-data/clustercheck.origin_of_time", > "+ touch /var/lib/config-data/clustercheck.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,file /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/clustercheck", > "++ stat -c %y /var/lib/config-data/clustercheck.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:02:13.880257121 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/clustercheck", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/clustercheck", > "++ find /etc /root /opt 
/var/spool/cron -newer /var/lib/config-data/clustercheck.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/clustercheck --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/clustercheck --mtime=1970-01-01", > "2018-06-25 10:02:20,573 INFO: 9423 -- Removing container: docker-puppet-clustercheck", > "2018-06-25 10:02:20,610 DEBUG: 9423 -- docker-puppet-clustercheck", > "2018-06-25 10:02:20,610 INFO: 9423 -- Finished processing puppet configs for clustercheck", > "2018-06-25 10:02:20,610 INFO: 9423 -- Starting configuration of redis using image 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-25 10:02:20,610 DEBUG: 9423 -- config_volume redis", > "2018-06-25 10:02:20,610 DEBUG: 9423 -- puppet_tags file,file_line,concat,augeas,cron,exec", > "2018-06-25 10:02:20,610 DEBUG: 9423 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle", > "2018-06-25 10:02:20,610 DEBUG: 9423 -- config_image 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-25 10:02:20,610 DEBUG: 9423 -- volumes []", > "2018-06-25 10:02:20,611 INFO: 9423 -- Removing container: docker-puppet-redis", > "2018-06-25 10:02:20,674 INFO: 9423 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-25 10:02:24,260 DEBUG: 9423 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-redis ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-redis", > "13055d264df1: Pulling fs layer", > "dfc35b833f61: Pulling fs layer", > "13055d264df1: Verifying Checksum", > "13055d264df1: Download complete", > "13055d264df1: Pull complete", > "dfc35b833f61: Verifying Checksum", > "dfc35b833f61: Download complete", > "dfc35b833f61: Pull complete", > "Digest: sha256:7782f917270ad46f451fe06063a6adb53afe9d81474a7af374ed7b9c09d1b055", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-25 10:02:24,264 DEBUG: 9423 -- NET_HOST enabled", > "2018-06-25 10:02:24,264 DEBUG: 9423 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-redis --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec --env NAME=redis --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpQgv0Ht:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-25 10:02:27,059 DEBUG: 9421 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.80 seconds", > 
"Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/gnocchi_external_project_owner]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/host]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/port]/ensure: created", > "Notice: /Stage[main]/Aodh::Evaluator/Aodh_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Db/Oslo::Db[aodh_config]/Aodh_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Rabbit[aodh_config]/Aodh_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Default[aodh_config]/Aodh_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: 
/Stage[main]/Aodh::Policy/Oslo::Policy[aodh_config]/Aodh_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Oslo::Middleware[aodh_config]/Aodh_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}ec2d2b160929cab612d6eb4785535674'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/owner: owner changed 'root' to 'aodh'", > "Notice: 
/Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/group: group changed 'root' to 'aodh'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[aodh_wsgi]/ensure: defined content as '{md5}09d823939c45501c11f2096289fe70cf'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/Apache::Vhost[aodh_wsgi]/Concat[10-aodh_wsgi.conf]/File[/etc/httpd/conf.d/10-aodh_wsgi.conf]/ensure: defined content as '{md5}57150841d8c05b0c1598cd820cf594bd'", > "Notice: Applied catalog in 2.01 seconds", > " Total: 112", > " Success: 112", > " Changed: 111", > " Out of sync: 111", > " Total: 331", > " Skipped: 40", > " File: 0.39", > " Aodh config: 0.87", > " Last run: 1529920945", > " Total: 5.78", > "Gathering files modified after 2018-06-25 10:02:14.711262641 +0000", > "2018-06-25 10:02:27,059 DEBUG: 9421 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config'", > "+ origin_of_time=/var/lib/config-data/aodh.origin_of_time", > "+ touch /var/lib/config-data/aodh.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config /etc/config.pp", > "Warning: ModuleLoader: module 'aodh' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/aodh/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/aodh.pp\", 123]", > "Warning: Scope(Class[Aodh::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/oslo/manifests/db.pp\", 140]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/aodh", > "++ stat -c %y /var/lib/config-data/aodh.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:02:14.711262641 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/aodh", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/aodh", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/aodh.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/aodh --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/aodh --mtime=1970-01-01", > "2018-06-25 10:02:27,059 INFO: 9421 -- Removing container: docker-puppet-aodh", > "2018-06-25 10:02:27,099 DEBUG: 9422 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.34 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/ensure: defined content as '{md5}69e032b0df155d12050294dfc6f40434'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}81f3beaae0e1273fed025adfd903c277'", > "Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{md5}af90358207ccfecae7af249d5ef7dd3e' to '{md5}e673fa5f672212d789e5a45ebfc5b712'", > 
"Notice: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/ensure: created", > "Notice: Applied catalog in 0.34 seconds", > " Skipped: 225", > " Total: 230", > " Out of sync: 4", > " Changed: 4", > " Config retrieval: 4.77", > " Total: 4.80", > " Config: 1529920940", > "Gathering files modified after 2018-06-25 10:02:16.355273512 +0000", > "2018-06-25 10:02:27,099 DEBUG: 9422 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/mysql.origin_of_time", > "+ touch /var/lib/config-data/mysql.origin_of_time", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/mysql_bundle.pp\", 133]:[\"/etc/config.pp\", 4]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 103]:[\"/etc/config.pp\", 4]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/aodh/manifests/db/mysql.pp\", 58]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 175]", > "Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'glance' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'heat' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'panko' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp\", 43]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/mysql", > "++ stat -c %y /var/lib/config-data/mysql.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:02:16.355273512 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/mysql", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/mysql", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/mysql.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/mysql --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/mysql --mtime=1970-01-01", > "2018-06-25 10:02:27,099 INFO: 9422 -- Removing container: docker-puppet-mysql", > "2018-06-25 10:02:27,108 DEBUG: 9421 -- docker-puppet-aodh", > "2018-06-25 10:02:27,108 INFO: 9421 -- Finished processing puppet configs for aodh", > "2018-06-25 10:02:27,109 INFO: 9421 -- Starting configuration of heat_api using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-25 10:02:27,109 DEBUG: 9421 -- config_volume heat_api", > "2018-06-25 10:02:27,109 DEBUG: 9421 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-06-25 10:02:27,109 DEBUG: 9421 -- manifest include ::tripleo::profile::base::heat::api", > "2018-06-25 
10:02:27,110 DEBUG: 9421 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-25 10:02:27,110 DEBUG: 9421 -- volumes []", > "2018-06-25 10:02:27,110 INFO: 9421 -- Removing container: docker-puppet-heat_api", > "2018-06-25 10:02:27,144 DEBUG: 9422 -- docker-puppet-mysql", > "2018-06-25 10:02:27,144 INFO: 9422 -- Finished processing puppet configs for mysql", > "2018-06-25 10:02:27,144 INFO: 9422 -- Starting configuration of nova using image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-25 10:02:27,144 DEBUG: 9422 -- config_volume nova", > "2018-06-25 10:02:27,144 DEBUG: 9422 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config", > "2018-06-25 10:02:27,145 DEBUG: 9422 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::conductor", > "include tripleo::profile::base::nova::consoleauth", > "include tripleo::profile::base::nova::scheduler", > "include tripleo::profile::base::nova::vncproxy", > "2018-06-25 10:02:27,145 DEBUG: 9422 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-25 10:02:27,145 DEBUG: 9422 -- volumes []", > "2018-06-25 10:02:27,145 INFO: 9422 -- Removing container: docker-puppet-nova", > "2018-06-25 10:02:27,186 INFO: 9421 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-25 10:02:27,209 INFO: 9422 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-25 10:02:28,575 DEBUG: 9422 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-api ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-api", > "0e3031608420: Already exists", > "b32f33ab1345: Pulling fs layer", > "b32f33ab1345: Download complete", > "b32f33ab1345: Pull complete", > "Digest: sha256:98f38e1deb6081bcc8d18a914af693593a06823741381f71dacd158824ef18f8", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-25 10:02:28,578 DEBUG: 9422 -- NET_HOST enabled", > "2018-06-25 10:02:28,578 DEBUG: 9422 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config --env NAME=nova --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpepSpPB:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-25 10:02:30,101 DEBUG: 9421 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api", > "15497368e843: Pulling fs layer", > "a91507f6d5dc: Pulling fs layer", > "a91507f6d5dc: Verifying Checksum", > "a91507f6d5dc: Download complete", > "15497368e843: Verifying Checksum", > "15497368e843: Download complete", > "15497368e843: Pull complete", > "a91507f6d5dc: Pull complete", > "Digest: sha256:7e8eb4cb5943296bd67f2e22c40a7519d3c71f8533541c54da0c9f5ef6b361ce", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-25 10:02:30,104 DEBUG: 9421 -- NET_HOST enabled", > "2018-06-25 10:02:30,105 DEBUG: 9421 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpKD9mje:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-25 10:02:31,685 DEBUG: 9423 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in 
environment production in 1.00 seconds", > "Notice: /Stage[main]/Redis::Config/File[/etc/redis]/ensure: created", > "Notice: /Stage[main]/Redis::Config/File[/var/log/redis]/mode: mode changed '0750' to '0755'", > "Notice: /Stage[main]/Redis::Config/File[/var/lib/redis]/mode: mode changed '0750' to '0755'", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]/ensure: defined content as '{md5}a2f723773964f5ea42b6c7c5d6b72208'", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]/mode: mode changed '0644' to '0444'", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]/ensure: defined content as '{md5}6152d9c54fa9e5398c46445d29df2eaf'", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Triggered 'refresh' from 1 events", > "Notice: Applied catalog in 0.06 seconds", > " Total: 6", > " Success: 6", > " Restarted: 1", > " Skipped: 11", > " Total: 21", > " Out of sync: 6", > " Changed: 6", > " Exec: 0.00", > " Augeas: 0.01", > " File: 0.01", > " Config retrieval: 1.16", > " Total: 1.18", > " Last run: 1529920950", > " Config: 1529920949", > "Gathering files modified after 2018-06-25 10:02:24.451326040 +0000", > "2018-06-25 10:02:31,685 DEBUG: 9423 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,exec ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec'", > "+ origin_of_time=/var/lib/config-data/redis.origin_of_time", > "+ touch /var/lib/config-data/redis.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec /etc/config.pp", > "Warning: ModuleLoader: module 'redis' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/redis", > "++ stat -c %y /var/lib/config-data/redis.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:02:24.451326040 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/redis", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/redis", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/redis.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/redis --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/redis --mtime=1970-01-01", > "2018-06-25 10:02:31,686 INFO: 9423 -- Removing container: docker-puppet-redis", > "2018-06-25 10:02:31,722 DEBUG: 9423 -- docker-puppet-redis", > "2018-06-25 10:02:31,722 INFO: 9423 -- Finished processing puppet configs for redis", > "2018-06-25 10:02:31,722 INFO: 9423 -- Starting configuration of keystone using image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-25 10:02:31,723 DEBUG: 9423 -- config_volume keystone", > "2018-06-25 10:02:31,723 DEBUG: 9423 -- puppet_tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config", > "2018-06-25 10:02:31,723 DEBUG: 9423 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > "2018-06-25 10:02:31,723 DEBUG: 9423 -- config_image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-25 10:02:31,723 DEBUG: 9423 -- volumes []", > "2018-06-25 10:02:31,723 INFO: 9423 -- Removing container: docker-puppet-keystone", > "2018-06-25 10:02:31,789 INFO: 9423 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-25 10:02:34,205 DEBUG: 9423 
-- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-keystone ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-keystone", > "6222a19b9ac2: Pulling fs layer", > "900dd421e68b: Pulling fs layer", > "900dd421e68b: Verifying Checksum", > "900dd421e68b: Download complete", > "6222a19b9ac2: Verifying Checksum", > "6222a19b9ac2: Download complete", > "6222a19b9ac2: Pull complete", > "900dd421e68b: Pull complete", > "Digest: sha256:5aaa5a4237af74f89ed31c8ff7e97414693ecfb9ce82bcb13f238c1a96030dc5", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-25 10:02:34,209 DEBUG: 9423 -- NET_HOST enabled", > "2018-06-25 10:02:34,209 DEBUG: 9423 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-keystone --env PUPPET_TAGS=file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config --env NAME=keystone --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpRRAm0B:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-25 10:02:42,774 DEBUG: 9421 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- 
hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.44 seconds", > "Notice: /Stage[main]/Heat::Cron::Purge_deleted/Cron[heat-manage purge_deleted]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin_password]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_type]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_url]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/username]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/password]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[clients_keystone/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[DEFAULT/max_json_body_size]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[ec2authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[yaql/limit_iterators]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[yaql/memory_quota]/ensure: created", > "Notice: /Stage[main]/Heat::Api/Heat_config[heat_api/bind_host]/ensure: created", > "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/max_retries]/ensure: created", > "Notice: 
/Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: 
/Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/rpc_response_timeout]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Middleware[heat_config]/Heat_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/expose_headers]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/max_age]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/allow_headers]/ensure: created", > "Notice: /Stage[main]/Heat::Policy/Oslo::Policy[heat_config]/Heat_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}27f73485d14b169a5662aab15e99251e'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[heat_api_wsgi]/ensure: defined content as '{md5}640891728ce5d46ae40234228561597c'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/Apache::Vhost[heat_api_wsgi]/Concat[10-heat_api_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_wsgi.conf]/ensure: defined content as '{md5}387b9a45eb94d3588782048aedd8875a'", > "Notice: Applied catalog in 2.48 seconds", > " Total: 121", > " Success: 121", > " Changed: 121", > " Out of sync: 121", > " Skipped: 32", > " Total: 335", > " Cron: 0.01", > " File: 0.37", > " Heat config: 1.48", > " Last 
run: 1529920961", > " Config retrieval: 4.05", > " Total: 5.96", > " Config: 1529920954", > "Gathering files modified after 2018-06-25 10:02:30.305363002 +0000", > "2018-06-25 10:02:42,775 DEBUG: 9421 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,heat_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/heat_api.origin_of_time", > "+ touch /var/lib/config-data/heat_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/heat/manifests/db.pp\", 75]:[\"/etc/puppet/modules/heat/manifests/init.pp\", 363]", > "Warning: Scope(Class[Heat::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/heat/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/heat.pp\", 134]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api", > "++ stat -c %y /var/lib/config-data/heat_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:02:30.305363002 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat_api --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat_api --mtime=1970-01-01", > "2018-06-25 10:02:42,775 INFO: 9421 -- Removing container: docker-puppet-heat_api", > "2018-06-25 10:02:42,826 DEBUG: 9421 -- docker-puppet-heat_api", > "2018-06-25 10:02:42,826 INFO: 9421 -- Finished processing puppet configs for heat_api", > "2018-06-25 10:02:42,827 INFO: 9421 -- Starting configuration of heat using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-25 10:02:42,827 DEBUG: 9421 -- config_volume heat", > "2018-06-25 10:02:42,827 DEBUG: 9421 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-06-25 10:02:42,827 DEBUG: 9421 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-06-25 10:02:42,827 DEBUG: 9421 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-25 10:02:42,827 DEBUG: 9421 -- volumes []", > "2018-06-25 10:02:42,828 INFO: 9421 -- Removing container: docker-puppet-heat", > "2018-06-25 10:02:42,881 INFO: 9421 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-25 10:02:42,885 DEBUG: 9421 -- NET_HOST enabled", > 
"2018-06-25 10:02:42,885 DEBUG: 9421 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpPTBtMP:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-25 10:02:46,955 DEBUG: 9423 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.70 seconds", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_token]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/driver]/ensure: created", > "Notice: 
/Stage[main]/Keystone/Keystone_config[token/expiration]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[ssl/enable]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/template_file]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/provider]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/notification_format]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/admin_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/public_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/0]/ensure: defined content as '{md5}b1af96e09797628c6062dc68f46871a3'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/1]/ensure: defined content as '{md5}0079ff08919d05593b86544afef978dd'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/0]/ensure: defined content as '{md5}bef2a8c1de490f3885955c81579548e5'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/1]/ensure: defined content as '{md5}fd56c9b59723bbc1df9e6017ec754983'", > "Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/revoke_by_id]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/max_active_keys]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[credential/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone::Config/Keystone_config[ec2/driver]/ensure: created", > "Notice: /Stage[main]/Keystone::Cron::Token_flush/Cron[keystone-manage token_flush]/ensure: created", > 
"Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Keystone::Policy/Oslo::Policy[keystone_config]/Keystone_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Middleware[keystone_config]/Keystone_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Default[keystone_config]/Keystone_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}cc8628c09ba01ff8517fb018063d57e4'", > "Notice: 
/Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/File[keystone_wsgi_main]/ensure: defined content as '{md5}072422f0d75777ed1783e6910b3ddc58'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/File[keystone_wsgi_admin]/ensure: defined content as '{md5}d6dda52b0e14d80a652ecf42686d3962'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_gssapi.conf]/ensure: removed", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/Apache::Vhost[keystone_wsgi_main]/Concat[10-keystone_wsgi_main.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_main.conf]/ensure: defined content as '{md5}a4dfc96e853fd828d1958f1564705a89'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/Apache::Vhost[keystone_wsgi_admin]/Concat[10-keystone_wsgi_admin.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_admin.conf]/ensure: defined content as '{md5}1af33fdc0f20b2e8f2f56c2796d09dbf'", > " Total: 122", > " Success: 122", > " Changed: 122", > " Out of sync: 122", > " Total: 320", > " Skipped: 34", > " File: 0.36", > " Keystone config: 1.46", > " Last run: 1529920965", > " Config retrieval: 4.30", > " Total: 6.19", > " Config: 1529920958", > "Gathering files modified after 2018-06-25 10:02:34.399388353 +0000", > "2018-06-25 10:02:46,955 DEBUG: 9423 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config'", > "+ origin_of_time=/var/lib/config-data/keystone.origin_of_time", > "+ touch /var/lib/config-data/keystone.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags 
file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/keystone/manifests/init.pp\", 757]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 760]:[\"/etc/config.pp\", 3]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 1108]:[\"/etc/config.pp\", 3]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/keystone", > "++ stat -c %y /var/lib/config-data/keystone.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:02:34.399388353 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/keystone", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/keystone", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/keystone.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/keystone --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/keystone --mtime=1970-01-01", > "2018-06-25 10:02:46,956 INFO: 9423 -- Removing container: docker-puppet-keystone", > "2018-06-25 10:02:47,006 DEBUG: 9423 -- docker-puppet-keystone", > "2018-06-25 10:02:47,006 INFO: 9423 -- Finished processing puppet configs for keystone", > "2018-06-25 10:02:47,006 INFO: 9423 -- Starting configuration of memcached using image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-25 10:02:47,006 DEBUG: 9423 -- config_volume memcached", > "2018-06-25 10:02:47,007 DEBUG: 9423 -- puppet_tags 
file,file_line,concat,augeas,cron,file", > "2018-06-25 10:02:47,007 DEBUG: 9423 -- manifest include ::tripleo::profile::base::memcached", > "2018-06-25 10:02:47,007 DEBUG: 9423 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-25 10:02:47,007 DEBUG: 9423 -- volumes []", > "2018-06-25 10:02:47,007 INFO: 9423 -- Removing container: docker-puppet-memcached", > "2018-06-25 10:02:47,070 INFO: 9423 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-25 10:02:48,462 DEBUG: 9423 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-memcached ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-memcached", > "ca902f72935a: Pulling fs layer", > "ca902f72935a: Verifying Checksum", > "ca902f72935a: Download complete", > "ca902f72935a: Pull complete", > "Digest: sha256:d1285a1e78900b5c0c58e5c03f624e46f6b871ff4ffa9d972ef012568a9f1046", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-25 10:02:48,466 DEBUG: 9423 -- NET_HOST enabled", > "2018-06-25 10:02:48,466 DEBUG: 9423 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-memcached --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=memcached --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpuNe0nV:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro 
--volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-25 10:02:51,776 DEBUG: 9422 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.72 seconds", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}c1c365631e5369abcf3ba94aa9832b23'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[nova_api_wsgi]/ensure: defined content as '{md5}8bcfb466d72544dd31a4f339243ed669'", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/instance_name_template]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[wsgi/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/enabled_apis]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_volume_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_workers]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[api/use_forwarded_for]/ensure: created", > 
"Notice: /Stage[main]/Nova::Api/Nova_config[api/fping_path]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/service_metadata_proxy]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/metadata_proxy_shared_secret]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Conductor/Nova_config[conductor/workers]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/driver]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/discover_hosts_in_cells_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[scheduler/max_attempts]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/host_subset_size]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_io_ops_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_instances_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/weight_classes]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_port]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/auth_schemes]/ensure: created", > "Notice: /Stage[main]/Nova::Policy/Oslo::Policy[nova_config]/Nova_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Oslo::Middleware[nova_config]/Nova_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Archive_deleted_rows/Cron[nova-manage db archive_deleted_rows]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Purge_shadow_tables/Cron[nova-manage db purge]/ensure: created", > "Notice: 
/Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/Apache::Vhost[nova_api_wsgi]/Concat[10-nova_api_wsgi.conf]/File[/etc/httpd/conf.d/10-nova_api_wsgi.conf]/ensure: defined content as '{md5}d874feea66376f67edef221b7bf4bb98'", > "Notice: Applied catalog in 10.19 seconds", > " Total: 180", > " Success: 180", > " Changed: 180", > " Out of sync: 180", > " Total: 501", > " Skipped: 75", > " Cron: 0.03", > " Package: 0.09", > " Total: 14.76", > " Last run: 1529920969", > " Config retrieval: 5.42", > " Nova config: 8.82", > " Config: 1529920953", > "Gathering files modified after 2018-06-25 10:02:28.783353474 +0000", > "2018-06-25 10:02:51,777 DEBUG: 9422 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova.origin_of_time", > "+ touch /var/lib/config-data/nova.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 92]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/nova/manifests/init.pp\", 533]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 92]", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/nova/manifests/scheduler/filter.pp\", 150]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/scheduler.pp\", 32]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova", > "++ stat -c %y /var/lib/config-data/nova.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:02:28.783353474 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova --mtime=1970-01-01", > "2018-06-25 10:02:51,777 INFO: 9422 -- Removing container: docker-puppet-nova", > "2018-06-25 10:02:51,824 DEBUG: 9422 -- docker-puppet-nova", > "2018-06-25 10:02:51,824 INFO: 9422 -- Finished processing puppet configs for nova", > "2018-06-25 10:02:51,825 INFO: 9422 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-25 10:02:51,825 DEBUG: 9422 -- config_volume iscsid", > "2018-06-25 10:02:51,825 DEBUG: 9422 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-06-25 10:02:51,825 DEBUG: 9422 -- manifest include ::tripleo::profile::base::iscsid", > 
"2018-06-25 10:02:51,825 DEBUG: 9422 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-25 10:02:51,825 DEBUG: 9422 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-06-25 10:02:51,825 INFO: 9422 -- Removing container: docker-puppet-iscsid", > "2018-06-25 10:02:51,893 INFO: 9422 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-25 10:02:52,527 DEBUG: 9422 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "ab4eae34093d: Pulling fs layer", > "ab4eae34093d: Verifying Checksum", > "ab4eae34093d: Download complete", > "ab4eae34093d: Pull complete", > "Digest: sha256:a46aa93fee87b0f173118da5c2a18dc271772adb839a481ec07f2a53534ac53c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-25 10:02:52,531 DEBUG: 9422 -- NET_HOST enabled", > "2018-06-25 10:02:52,531 DEBUG: 9422 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpgeDZ5l:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi 
--entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-25 10:02:53,662 DEBUG: 9421 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.22 seconds", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/auth_encryption_key]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_metadata_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_waitcondition_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_resources_per_stack]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/num_engine_workers]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/convergence_engine]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/reauthentication_auth_method]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_nested_stack_depth]/ensure: created", > "Notice: Applied catalog in 1.85 seconds", > " Total: 48", > " Success: 48", > " Skipped: 21", > " Total: 223", > " Out of sync: 48", > " Changed: 48", > " Heat config: 1.60", > " Last run: 1529920972", > " Config retrieval: 2.57", > " Total: 4.24", > " Config: 1529920968", > "Gathering files modified after 2018-06-25 10:02:43.068440719 +0000", > "2018-06-25 10:02:53,662 DEBUG: 9421 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat.origin_of_time", > "+ touch /var/lib/config-data/heat.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat", > "++ stat -c %y /var/lib/config-data/heat.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:02:43.068440719 +0000'", > 
"+ mkdir -p /var/lib/config-data/puppet-generated/heat", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat --mtime=1970-01-01", > "2018-06-25 10:02:53,662 INFO: 9421 -- Removing container: docker-puppet-heat", > "2018-06-25 10:02:53,700 DEBUG: 9421 -- docker-puppet-heat", > "2018-06-25 10:02:53,700 INFO: 9421 -- Finished processing puppet configs for heat", > "2018-06-25 10:02:53,701 INFO: 9421 -- Starting configuration of cinder using image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-25 10:02:53,701 DEBUG: 9421 -- config_volume cinder", > "2018-06-25 10:02:53,701 DEBUG: 9421 -- puppet_tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line", > "2018-06-25 10:02:53,701 DEBUG: 9421 -- manifest include ::tripleo::profile::base::cinder::api", > "include ::tripleo::profile::base::cinder::backup::ceph", > "include ::tripleo::profile::base::cinder::scheduler", > "include ::tripleo::profile::base::lvm", > "2018-06-25 10:02:53,701 DEBUG: 9421 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-25 10:02:53,701 DEBUG: 9421 -- volumes []", > "2018-06-25 10:02:53,702 INFO: 9421 -- Removing container: docker-puppet-cinder", > "2018-06-25 10:02:53,766 INFO: 9421 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-25 10:02:54,569 DEBUG: 9423 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment 
production in 0.60 seconds", > "Notice: /Stage[main]/Memcached/File[/etc/sysconfig/memcached]/content: content changed '{md5}a50ed62e82d31fb4cb2de2226650c545' to '{md5}e91f0fbb7d29cd8d7207e0329b1f1d65'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d/memcached.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > "Notice: Applied catalog in 0.10 seconds", > " Total: 3", > " Success: 3", > " Skipped: 10", > " Config retrieval: 0.70", > " Total: 0.73", > " Last run: 1529920973", > " Config: 1529920973", > "Gathering files modified after 2018-06-25 10:02:48.666473601 +0000", > "2018-06-25 10:02:54,569 DEBUG: 9423 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/memcached.origin_of_time", > "+ touch /var/lib/config-data/memcached.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/memcached", > "++ stat -c %y /var/lib/config-data/memcached.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:02:48.666473601 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/memcached", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/memcached", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/memcached.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/memcached --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/memcached --mtime=1970-01-01", > "2018-06-25 10:02:54,569 INFO: 9423 -- Removing container: docker-puppet-memcached", > "2018-06-25 10:02:54,608 DEBUG: 9423 -- docker-puppet-memcached", > "2018-06-25 10:02:54,608 INFO: 9423 -- Finished 
processing puppet configs for memcached", > "2018-06-25 10:02:54,609 INFO: 9423 -- Starting configuration of panko using image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-25 10:02:54,609 DEBUG: 9423 -- config_volume panko", > "2018-06-25 10:02:54,609 DEBUG: 9423 -- puppet_tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config", > "2018-06-25 10:02:54,609 DEBUG: 9423 -- manifest include tripleo::profile::base::panko::api", > "2018-06-25 10:02:54,609 DEBUG: 9423 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-25 10:02:54,609 DEBUG: 9423 -- volumes []", > "2018-06-25 10:02:54,609 INFO: 9423 -- Removing container: docker-puppet-panko", > "2018-06-25 10:02:54,679 INFO: 9423 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-25 10:02:57,211 DEBUG: 9423 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-panko-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-panko-api", > "e67be68e6dd6: Pulling fs layer", > "37e4d86c7a37: Pulling fs layer", > "37e4d86c7a37: Verifying Checksum", > "37e4d86c7a37: Download complete", > "e67be68e6dd6: Verifying Checksum", > "e67be68e6dd6: Download complete", > "e67be68e6dd6: Pull complete", > "37e4d86c7a37: Pull complete", > "Digest: sha256:af7f2810620f1617a589387bcde33173bbf96ee4d0ea85e34d70bdfd83328d21", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-25 10:02:57,214 DEBUG: 9423 -- NET_HOST enabled", > "2018-06-25 10:02:57,214 DEBUG: 9423 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-panko --env PUPPET_TAGS=file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config --env NAME=panko --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpjBjjZ5:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-25 10:02:58,814 DEBUG: 9422 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.58 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > "Notice: Applied catalog in 0.20 seconds", > " Total: 2", > " Success: 2", > " Total: 10", > " Out of sync: 2", > " Changed: 2", > " Skipped: 8", > " Exec: 0.02", > " Config retrieval: 0.73", > " Total: 0.76", > " Last run: 1529920978", > " Config: 1529920977", > "Gathering files modified after 2018-06-25 10:02:52.738497075 +0000", > "2018-06-25 10:02:58,814 DEBUG: 9422 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console 
--modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:02:52.738497075 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/iscsid --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-06-25 10:02:58,814 INFO: 9422 -- Removing container: docker-puppet-iscsid", > "2018-06-25 10:02:58,854 DEBUG: 9422 -- docker-puppet-iscsid", > "2018-06-25 10:02:58,854 INFO: 9422 -- Finished processing puppet configs for iscsid", > "2018-06-25 10:02:58,854 INFO: 9422 -- Starting configuration of glance_api using image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-25 10:02:58,855 DEBUG: 9422 -- config_volume glance_api", > "2018-06-25 10:02:58,855 DEBUG: 9422 -- puppet_tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-06-25 10:02:58,855 DEBUG: 9422 -- manifest include ::tripleo::profile::base::glance::api", > "2018-06-25 10:02:58,855 DEBUG: 9422 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-25 10:02:58,855 DEBUG: 9422 -- volumes []", > "2018-06-25 10:02:58,855 INFO: 9422 -- Removing container: docker-puppet-glance_api", > "2018-06-25 10:02:58,922 INFO: 9422 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-25 10:03:01,963 DEBUG: 
9421 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-api", > "5e7b63a88a76: Pulling fs layer", > "56e05018c234: Pulling fs layer", > "56e05018c234: Verifying Checksum", > "56e05018c234: Download complete", > "5e7b63a88a76: Verifying Checksum", > "5e7b63a88a76: Download complete", > "5e7b63a88a76: Pull complete", > "56e05018c234: Pull complete", > "Digest: sha256:183deb2657acebac30853e0973dad9bbf1f1f1288cff99eeb24fb4ae2fc7b1d3", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-25 10:03:01,966 DEBUG: 9421 -- NET_HOST enabled", > "2018-06-25 10:03:01,966 DEBUG: 9421 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-cinder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line --env NAME=cinder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpGHeE5Z:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-25 
10:03:04,763 DEBUG: 9422 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-glance-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-glance-api", > "a5deab52212a: Pulling fs layer", > "8b31454e1757: Pulling fs layer", > "8b31454e1757: Verifying Checksum", > "8b31454e1757: Download complete", > "a5deab52212a: Verifying Checksum", > "a5deab52212a: Download complete", > "a5deab52212a: Pull complete", > "8b31454e1757: Pull complete", > "Digest: sha256:266d9d00d90cc84effdabd7cad9bea244a8fb918a029a3d2bafa4e2af9a72e77", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-25 10:03:04,767 DEBUG: 9422 -- NET_HOST enabled", > "2018-06-25 10:03:04,767 DEBUG: 9422 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-glance_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config --env NAME=glance_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp9HbEJt:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-25 10:03:09,627 DEBUG: 9423 -- Notice: 
hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.46 seconds", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/host]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/port]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/workers]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_api_paste_ini[pipeline:main/pipeline]/ensure: created", > "Notice: /Stage[main]/Panko::Expirer/Cron[panko-expirer]/ensure: created", > "Notice: /Stage[main]/Panko::Logging/Oslo::Log[panko_config]/Panko_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Panko::Db/Oslo::Db[panko_config]/Panko_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Panko::Policy/Oslo::Policy[panko_config]/Panko_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/username]/ensure: created", > "Notice: 
/Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Oslo::Middleware[panko_config]/Panko_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}990ae0895a6782f16f3453e2208a2287'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[/var/www/cgi-bin/panko]/ensure: created", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[panko_wsgi]/ensure: defined content as '{md5}e6f446b6267321fd2251a3e83021181a'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/Apache::Vhost[panko_wsgi]/Concat[10-panko_wsgi.conf]/File[/etc/httpd/conf.d/10-panko_wsgi.conf]/ensure: defined content as '{md5}c25bc09404806f5d4e05d60427def1f7'", > "Notice: Applied catalog in 1.09 seconds", > " Total: 101", > " Success: 101", > " Changed: 101", > " Out of sync: 101", > " Total: 255", > " Panko api paste ini: 0.00", > " Panko config: 0.20", > " File: 0.35", > " Last run: 1529920988", > " Total: 4.68", > " Config: 1529920983", > "Gathering files modified after 2018-06-25 10:02:57.418523595 +0000", > "2018-06-25 10:03:09,628 DEBUG: 9423 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config ']'", > 
"+ TAGS='--tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config'", > "+ origin_of_time=/var/lib/config-data/panko.origin_of_time", > "+ touch /var/lib/config-data/panko.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko.pp\", 32]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/db.pp\", 59]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko/api.pp\", 83]", > "Warning: Scope(Class[Panko::Api]): This Class is deprecated and will be removed in future releases.", > "Warning: Scope(Class[Panko::Keystone::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/panko", > "++ stat -c %y /var/lib/config-data/panko.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:02:57.418523595 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/panko", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/panko", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/panko.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/panko --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/panko --mtime=1970-01-01", > "2018-06-25 10:03:09,628 INFO: 9423 -- Removing container: docker-puppet-panko", > "2018-06-25 10:03:09,678 DEBUG: 9423 -- docker-puppet-panko", > "2018-06-25 10:03:09,678 INFO: 9423 -- Finished processing puppet configs for panko", > "2018-06-25 10:03:09,679 INFO: 9423 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:03:09,679 DEBUG: 9423 -- config_volume crond", > "2018-06-25 10:03:09,679 DEBUG: 9423 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-06-25 10:03:09,679 DEBUG: 9423 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-25 10:03:09,679 DEBUG: 9423 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:03:09,679 DEBUG: 9423 -- volumes []", > "2018-06-25 10:03:09,680 INFO: 9423 -- Removing container: docker-puppet-crond", > "2018-06-25 10:03:09,742 INFO: 9423 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:03:10,242 DEBUG: 9423 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "a94d9ea04263: Pulling fs layer", > "a94d9ea04263: Download complete", > "a94d9ea04263: Pull complete", > "Digest: sha256:cbc58f1f133447db6c3e634ca05251825f6a2ede8528959b5cd6e0cb1c3de3ba", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:03:10,246 DEBUG: 9423 -- NET_HOST enabled", > "2018-06-25 10:03:10,246 DEBUG: 9423 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpuj8hkm:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-25 10:03:15,457 DEBUG: 9422 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.20 seconds", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_port]/ensure: created", > "Notice: 
/Stage[main]/Glance::Api/Glance_api_config[DEFAULT/workers]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_image_direct_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_multiple_locations]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_cache_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enabled_import_methods]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/node_staging_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_member_quota]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v1_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v2_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/stores]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[paste_deploy/flavor]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_user]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_pool]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/default_store]/ensure: created", > "Notice: 
/Stage[main]/Glance::Policy/Oslo::Policy[glance_api_config]/Glance_api_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Db/Oslo::Db[glance_api_config]/Glance_api_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/password]/ensure: 
created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Oslo::Middleware[glance_api_config]/Glance_api_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_api_config]/Glance_api_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Default[glance_api_config]/Glance_api_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 2.36 seconds", > " Total: 44", > " Success: 44", > " Out of sync: 44", > " Changed: 44", > " Skipped: 59", > " Glance cache config: 0.13", > " Glance api config: 1.99", > " Last run: 1529920994", > " Config retrieval: 2.51", > " Total: 4.71", > " Config: 1529920989", > "Gathering files modified after 2018-06-25 10:03:04.989565489 +0000", > "2018-06-25 10:03:15,457 DEBUG: 9422 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config ']'", > "+ TAGS='--tags 
file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config'", > "+ origin_of_time=/var/lib/config-data/glance_api.origin_of_time", > "+ touch /var/lib/config-data/glance_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/config.pp\", 48]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/glance/api.pp\", 202]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/api/db.pp\", 69]:[\"/etc/puppet/modules/glance/manifests/api.pp\", 371]", > "Warning: Unknown variable: 'default_store_real'. at /etc/puppet/modules/glance/manifests/api.pp:438:9", > "Warning: Scope(Class[Glance::Api]): default_store not provided, it will be automatically set to http", > "Warning: Scope(Class[Glance::Api::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/glance_api", > "++ stat -c %y /var/lib/config-data/glance_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:03:04.989565489 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/glance_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/glance_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/glance_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/glance_api --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/glance_api --mtime=1970-01-01", > "2018-06-25 10:03:15,457 INFO: 9422 -- Removing container: docker-puppet-glance_api", > "2018-06-25 10:03:15,492 DEBUG: 9422 -- docker-puppet-glance_api", > "2018-06-25 10:03:15,492 INFO: 9422 -- Finished processing puppet configs for glance_api", > "2018-06-25 10:03:15,492 INFO: 9422 -- Starting configuration of rabbitmq using image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-25 10:03:15,493 DEBUG: 9422 -- config_volume rabbitmq", > "2018-06-25 10:03:15,493 DEBUG: 9422 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-06-25 10:03:15,493 DEBUG: 9422 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "2018-06-25 10:03:15,493 DEBUG: 9422 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-25 10:03:15,493 DEBUG: 9422 -- volumes []", > "2018-06-25 10:03:15,493 INFO: 9422 -- Removing container: docker-puppet-rabbitmq", > "2018-06-25 10:03:15,556 INFO: 9422 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-25 10:03:16,071 DEBUG: 9423 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- 
hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.44 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}13ae5d5b43716a32da6855edd3f15758'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.04 seconds", > " Skipped: 7", > " Total: 9", > " Config retrieval: 0.53", > " Total: 0.55", > " Last run: 1529920995", > " Config: 1529920994", > "Gathering files modified after 2018-06-25 10:03:10.445594918 +0000", > "2018-06-25 10:03:16,071 DEBUG: 9423 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:03:10.445594918 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-06-25 10:03:16,071 INFO: 9423 -- Removing container: docker-puppet-crond", > "2018-06-25 
10:03:16,111 DEBUG: 9423 -- docker-puppet-crond", > "2018-06-25 10:03:16,111 INFO: 9423 -- Finished processing puppet configs for crond", > "2018-06-25 10:03:16,111 INFO: 9423 -- Starting configuration of haproxy using image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-25 10:03:16,111 DEBUG: 9423 -- config_volume haproxy", > "2018-06-25 10:03:16,111 DEBUG: 9423 -- puppet_tags file,file_line,concat,augeas,cron,haproxy_config", > "2018-06-25 10:03:16,111 DEBUG: 9423 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "2018-06-25 10:03:16,111 DEBUG: 9423 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-25 10:03:16,112 DEBUG: 9423 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-06-25 10:03:16,112 INFO: 9423 -- Removing container: docker-puppet-haproxy", > "2018-06-25 10:03:16,186 INFO: 9423 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-25 10:03:19,193 DEBUG: 9421 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.98 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Lvm/Augeas[udev options in lvm.conf]/returns: executed successfully", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}756f90f14cc82a3c98c5b9003bd05deb'", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]/ensure: 
created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]/ensure: created", > "Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]/ensure: created", > "Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_version]/ensure: created", > "Notice: /Stage[main]/Cinder::Cron::Db_purge/Cron[cinder-manage db purge]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_listen]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_workers]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/nova_catalog_info]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[key_manager/backend]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_user]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_chunk_size]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_pool]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_unit]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_count]/ensure: created", > "Notice: /Stage[main]/Cinder::Scheduler/Cinder_config[DEFAULT/scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]/ensure: 
created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Cinder::Policy/Oslo::Policy[cinder_config]/Cinder_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Oslo::Middleware[cinder_config]/Cinder_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: 
/Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/File[cinder_wsgi]/ensure: defined content as '{md5}870efbe437d63cd260287cd36472d7b1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File[/etc/sysconfig/openstack-cinder-volume]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/Apache::Vhost[cinder_wsgi]/Concat[10-cinder_wsgi.conf]/File[/etc/httpd/conf.d/10-cinder_wsgi.conf]/ensure: defined content as '{md5}7f25f62ad8d718fb52fb97b9776221ee'", > "Notice: Applied catalog in 5.29 seconds", > " Total: 134", > " Success: 134", > " Changed: 134", > " Out of sync: 134", > " Skipped: 36", > " Total: 374", > " File line: 0.00", > " Augeas: 0.65", > " Last run: 1529920997", > " Cinder config: 3.54", > " Config retrieval: 4.61", > " Total: 9.23", > " Config: 1529920987", > "Gathering files modified after 2018-06-25 10:03:02.188550133 +0000", > "2018-06-25 10:03:19,194 DEBUG: 9421 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/cinder.origin_of_time", > "+ touch 
/var/lib/config-data/cinder.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/db.pp\", 69]:[\"/etc/puppet/modules/cinder/manifests/init.pp\", 320]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/cinder.pp\", 127]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/api.pp\", 203]:[\"/etc/config.pp\", 2]", > "Warning: Scope(Class[Cinder::Api]): The nova_catalog_admin_info parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Cinder::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/backup.pp:83:18", > "Warning: Unknown variable: 'ensure'. 
at /etc/puppet/modules/cinder/manifests/volume.pp:64:18", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/cinder", > "++ stat -c %y /var/lib/config-data/cinder.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:03:02.188550133 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/cinder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/cinder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/cinder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/cinder --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/cinder --mtime=1970-01-01", > "2018-06-25 10:03:19,194 INFO: 9421 -- Removing container: docker-puppet-cinder", > "2018-06-25 10:03:19,314 DEBUG: 9421 -- docker-puppet-cinder", > "2018-06-25 10:03:19,314 INFO: 9421 -- Finished processing puppet configs for cinder", > "2018-06-25 10:03:19,315 INFO: 9421 -- Starting configuration of swift using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-25 10:03:19,315 DEBUG: 9421 -- config_volume swift", > "2018-06-25 10:03:19,315 DEBUG: 9421 -- puppet_tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-06-25 10:03:19,315 DEBUG: 9421 -- manifest include ::tripleo::profile::base::swift::proxy", > "include ::tripleo::profile::base::swift::storage", > "2018-06-25 10:03:19,315 DEBUG: 9421 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-25 10:03:19,315 DEBUG: 9421 -- volumes []", > "2018-06-25 10:03:19,316 INFO: 9421 -- Removing container: docker-puppet-swift", > "2018-06-25 10:03:19,369 INFO: 
9421 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-25 10:03:19,373 DEBUG: 9421 -- NET_HOST enabled", > "2018-06-25 10:03:19,373 DEBUG: 9421 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift --env PUPPET_TAGS=file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server --env NAME=swift --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpI6o3sX:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-25 10:03:20,714 DEBUG: 9422 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-rabbitmq ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-rabbitmq", > "e603d701fd04: Pulling fs layer", > "e603d701fd04: Verifying Checksum", > "e603d701fd04: Download complete", > "e603d701fd04: Pull complete", > "Digest: sha256:4e07b8b4fd82b69e2a7ba105447776e730b0dd8fffa70a2f13c5c0e612b1ccdc", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-25 10:03:20,718 DEBUG: 9422 -- NET_HOST enabled", > "2018-06-25 10:03:20,718 DEBUG: 9422 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-rabbitmq --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=rabbitmq --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpGEr_Dg:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-25 10:03:20,823 DEBUG: 9423 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-haproxy ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-haproxy", > "a82042577283: Pulling fs layer", > "a82042577283: Verifying Checksum", > "a82042577283: Download complete", > "a82042577283: Pull complete", > "Digest: sha256:79a7901cc6403d11b4e7f6978d7e99a1879972ccb61f430f5660695c8683d7a0", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-25 10:03:20,826 DEBUG: 9423 -- NET_HOST enabled", > "2018-06-25 10:03:20,827 DEBUG: 9423 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-haproxy --env PUPPET_TAGS=file,file_line,concat,augeas,cron,haproxy_config --env NAME=haproxy --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpvOZ2Bj:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/ipa/ca.crt:/etc/ipa/ca.crt:ro --volume /etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro --volume /etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro --volume /etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-25 10:03:29,360 DEBUG: 9423 -- Notice: hiera(): Cannot load backend module_data: cannot 
load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.30 seconds", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/content: content changed '{md5}1f337186b0e1ba5ee82760cb437fb810' to '{md5}80fa14cff3b060e06166a1de84ae95e8'", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/mode: mode changed '0644' to '0640'", > "Notice: Applied catalog in 0.31 seconds", > " Changed: 1", > " Out of sync: 1", > " Total: 76", > " File: 0.09", > " Last run: 1529921008", > " Config retrieval: 2.53", > " Total: 2.62", > " Config: 1529921005", > "Gathering files modified after 2018-06-25 10:03:21.035650289 +0000", > "2018-06-25 10:03:29,360 DEBUG: 9423 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,haproxy_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,haproxy_config'", > "+ origin_of_time=/var/lib/config-data/haproxy.origin_of_time", > "+ touch /var/lib/config-data/haproxy.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,haproxy_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp\", 65]:", > "Warning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. 
Please review the documentation regarding the implications.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/haproxy", > "++ stat -c %y /var/lib/config-data/haproxy.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:03:21.035650289 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/haproxy", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/haproxy", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/haproxy.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/haproxy --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/haproxy --mtime=1970-01-01", > "2018-06-25 10:03:29,360 INFO: 9423 -- Removing container: docker-puppet-haproxy", > "2018-06-25 10:03:29,407 DEBUG: 9423 -- docker-puppet-haproxy", > "2018-06-25 10:03:29,407 INFO: 9423 -- Finished processing puppet configs for haproxy", > "2018-06-25 10:03:29,407 INFO: 9423 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-25 10:03:29,407 DEBUG: 9423 -- config_volume ceilometer", > "2018-06-25 10:03:29,407 DEBUG: 9423 -- puppet_tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config", > "2018-06-25 10:03:29,407 DEBUG: 9423 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-06-25 10:03:29,407 DEBUG: 9423 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-25 10:03:29,408 DEBUG: 9423 -- volumes []", > "2018-06-25 10:03:29,408 INFO: 9423 -- Removing container: docker-puppet-ceilometer", > "2018-06-25 10:03:29,472 INFO: 9423 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-25 10:03:30,154 DEBUG: 9421 
-- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.90 seconds", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/api_class]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/username]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.12:11211'", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/auto_create_account_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/concurrency]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/expiring_objects_account_name]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/process]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/processes]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/reclaim_age]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/recon_cache_path]/ensure: created", > "Notice: 
/Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/report_interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_level]/ensure: created", > "Notice: /Stage[main]/Rsync::Server/Xinetd::Service[rsync]/File[/rsync]/ensure: defined content as '{md5}dedcebd7bbd584496b9021d2989da44a'", > "Notice: /Stage[main]/Rsync::Server/Concat[/etc/rsyncd.conf]/File[/etc/rsyncd.conf]/content: content changed '{md5}c63fccb45c0dcbbbe17d0f4bdba920ec' to '{md5}2a57ac98b09ccd97ca05d1843118aa4b'", > "Notice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_suffix]/value: value changed '%SWIFT_HASH_PATH_SUFFIX%' to 'iyOvVNKNp4Se1OrQ2PluiPpRp'", > "Notice: /Stage[main]/Swift/Swift_config[swift-constraints/max_header_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/bind_ip]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/workers]/value: value changed '8' to 'auto'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_headers]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_address]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[pipeline:main/pipeline]/value: value changed 'catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server' to 'catch_errors healthcheck proxy-logging cache ratelimit bulk tempurl formpost authtoken 
keystone staticweb copy container_quotas account_quotas slo dlo versioned_writes proxy-logging proxy-server'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_address]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/log_handoffs]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/allow_account_management]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/account_autocreate]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/node_timeout]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Cache/Swift_proxy_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.12:11211'", > "Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/operator_roles]/value: value changed 'admin, SwiftOperator' to 'admin, swiftoperator, ResellerAdmin'", > "Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/reseller_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/File[/var/cache/swift]/mode: mode changed '0755' to '0700'", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/signing_dir]/value: value changed '/tmp/keystone-signing-swift' to '/var/cache/swift'", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/www_authenticate_uri]/ensure: created", > 
"Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_plugin]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/username]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/password]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/delay_auth_decision]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/cache]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/include_service_catalog]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/url_base]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/clock_accuracy]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/max_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/log_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/rate_buffer_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/account_ratelimit]/ensure: created", > 
"Notice: /Stage[main]/Swift::Proxy::Formpost/Swift_proxy_config[filter:formpost/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_containers_per_extraction]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_failed_extractions]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_deletes_per_request]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/yield_frequency]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Versioned_writes/Swift_proxy_config[filter:versioned_writes/allow_versioned_writes]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_segments]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/min_segment_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_after_segment]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_after_segment]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Copy/Swift_proxy_config[filter:copy/object_post_as_copy]/value: value changed 'false' to 'True'", > "Notice: /Stage[main]/Swift::Proxy::Container_quotas/Swift_proxy_config[filter:container_quotas/use]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy::Account_quotas/Swift_proxy_config[filter:account_quotas/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/disable_encryption]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/keymaster_config_path]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/auth_pipeline_check]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/auth_uri]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node/d1]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/File[/etc/swift/account-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/File[/etc/swift/container-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/File[/etc/swift/object-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/etc/swift/account-server.conf]/ensure: defined content as '{md5}1ec4c5d341469878967c80eeea4a39bc'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/etc/swift/container-server.conf]/ensure: defined 
content as '{md5}1a1d9c6301828fc14ec77443206d0eec'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/etc/swift/object-server.conf]/ensure: defined content as '{md5}789bec691d88905e6e6a6ef6940a83ff'", > "Notice: Applied catalog in 0.69 seconds", > " Total: 97", > " Success: 97", > " Total: 192", > " Skipped: 37", > " Out of sync: 97", > " Changed: 97", > " Swift config: 0.00", > " Swift keymaster config: 0.01", > " Swift object expirer config: 0.01", > " File: 0.04", > " Swift proxy config: 0.27", > " Config retrieval: 2.37", > " Total: 2.71", > "Gathering files modified after 2018-06-25 10:03:19.580642814 +0000", > "2018-06-25 10:03:30,155 DEBUG: 9421 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server'", > "+ origin_of_time=/var/lib/config-data/swift.origin_of_time", > "+ touch /var/lib/config-data/swift.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/swift/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 147]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 163]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 165]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > "Warning: Unknown variable: 'methods_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:100:56", > "Warning: Unknown variable: 'incoming_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:101:56", > "Warning: Unknown variable: 'incoming_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:102:56", > "Warning: Unknown variable: 'outgoing_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:103:56", > "Warning: Unknown variable: 'outgoing_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:104:56", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the object storage server has changed from 6000 to 6200 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the container storage server has changed from 6001 to 6201 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the account storage server has changed from 6002 to 6202 and will be changed in a later release", > "Warning: Class 'xinetd' is already defined at /etc/config.pp:6; cannot redefine at /etc/puppet/modules/xinetd/manifests/init.pp:12", > "Warning: Unknown variable: 'xinetd::params::default_user'. 
at /etc/puppet/modules/xinetd/manifests/service.pp:110:14", > "Warning: Unknown variable: 'xinetd::params::default_group'. at /etc/puppet/modules/xinetd/manifests/service.pp:116:15", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:161:13", > "Warning: Unknown variable: 'xinetd::service_name'. at /etc/puppet/modules/xinetd/manifests/service.pp:166:24", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:167:21", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 183]:", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 197]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift", > "++ stat -c %y /var/lib/config-data/swift.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:03:19.580642814 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/swift --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/swift --mtime=1970-01-01", > "2018-06-25 10:03:30,155 INFO: 9421 -- Removing container: docker-puppet-swift", > "2018-06-25 10:03:30,193 DEBUG: 9421 -- docker-puppet-swift", > "2018-06-25 10:03:30,193 INFO: 9421 -- Finished processing puppet configs for swift", > "2018-06-25 10:03:30,194 INFO: 9421 -- Starting configuration of 
heat_api_cfn using image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-25 10:03:30,194 DEBUG: 9421 -- config_volume heat_api_cfn", > "2018-06-25 10:03:30,194 DEBUG: 9421 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-06-25 10:03:30,194 DEBUG: 9421 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-06-25 10:03:30,194 DEBUG: 9421 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-25 10:03:30,194 DEBUG: 9421 -- volumes []", > "2018-06-25 10:03:30,195 INFO: 9421 -- Removing container: docker-puppet-heat_api_cfn", > "2018-06-25 10:03:30,263 INFO: 9421 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-25 10:03:30,899 DEBUG: 9421 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn", > "15497368e843: Already exists", > "4089b2a1d02c: Pulling fs layer", > "4089b2a1d02c: Download complete", > "4089b2a1d02c: Pull complete", > "Digest: sha256:bbcf3cc8eeb6d8910642b40cfa9fe544a33bee49cfb4512abe49c5bf176ed8f0", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-25 10:03:30,902 DEBUG: 9421 -- NET_HOST enabled", > "2018-06-25 10:03:30,902 DEBUG: 9421 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api_cfn --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api_cfn --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpQ3nDom:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume 
/dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-25 10:03:31,995 DEBUG: 9423 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "333aa6b2b383: Pulling fs layer", > "1eb9ef5adcb4: Pulling fs layer", > "333aa6b2b383: Verifying Checksum", > "333aa6b2b383: Download complete", > "1eb9ef5adcb4: Verifying Checksum", > "1eb9ef5adcb4: Download complete", > "333aa6b2b383: Pull complete", > "1eb9ef5adcb4: Pull complete", > "Digest: sha256:3f638e03aaf1d7e303183e06ff1627a5a0efeaef228a7be1e9667ae62d7d6a1b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-25 10:03:31,999 DEBUG: 9423 -- NET_HOST enabled", > "2018-06-25 10:03:31,999 DEBUG: 9423 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpv1Ghtj:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-25 10:03:32,805 DEBUG: 9422 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.07 seconds", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/group: group changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/ensure: defined content as '{md5}7c5cf5bed5504668815fc0e555e57c66'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/ensure: defined content as '{md5}12f8d1a1f9f57f23c1be6c7bf2286e73'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/ensure: defined content as '{md5}44d4ef5cb86ab30e6127e83939ef09c4'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/ensure: defined content as '{md5}91d370d2c5a1af171c9d5b5985fca733'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/ensure: defined content as '{md5}1030abc4db405b5f2969643e99bc7435'", > "Notice: 
/Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/content: content changed '{md5}b346ec0a8320f85f795bf612f6b02da7' to '{md5}ebe8fcd83a98e09c5651db7925e2dd8b'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/mode: mode changed '0644' to '0640'", > " Total: 12", > " Success: 12", > " Total: 19", > " Out of sync: 9", > " Changed: 9", > " Config retrieval: 1.23", > " Total: 1.27", > " Last run: 1529921011", > " Config: 1529921010", > "Gathering files modified after 2018-06-25 10:03:20.928649740 +0000", > "2018-06-25 10:03:32,806 DEBUG: 9422 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/rabbitmq.origin_of_time", > "+ touch /var/lib/config-data/rabbitmq.origin_of_time", > "Warning: ModuleLoader: module 'rabbitmq' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/rabbitmq", > "++ stat -c %y /var/lib/config-data/rabbitmq.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:03:20.928649740 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/rabbitmq", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/rabbitmq", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/rabbitmq.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/rabbitmq --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/rabbitmq --mtime=1970-01-01", > "2018-06-25 10:03:32,806 INFO: 9422 -- Removing container: docker-puppet-rabbitmq", > "2018-06-25 10:03:32,850 DEBUG: 9422 -- docker-puppet-rabbitmq", > "2018-06-25 10:03:32,850 INFO: 9422 -- Finished processing puppet configs for rabbitmq", > "2018-06-25 
10:03:32,850 INFO: 9422 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-25 10:03:32,850 DEBUG: 9422 -- config_volume neutron", > "2018-06-25 10:03:32,850 DEBUG: 9422 -- puppet_tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-06-25 10:03:32,850 DEBUG: 9422 -- manifest include tripleo::profile::base::neutron::server", > "include ::tripleo::profile::base::neutron::plugins::ml2", > "include tripleo::profile::base::neutron::dhcp", > "include tripleo::profile::base::neutron::l3", > "include tripleo::profile::base::neutron::metadata", > "include ::tripleo::profile::base::neutron::ovs", > "2018-06-25 10:03:32,850 DEBUG: 9422 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-25 10:03:32,850 DEBUG: 9422 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-06-25 10:03:32,851 INFO: 9422 -- Removing container: docker-puppet-neutron", > "2018-06-25 10:03:32,922 INFO: 9422 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-25 10:03:37,273 DEBUG: 9422 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server", > "ea1d509b6f44: Pulling fs layer", > "e9f9993bb931: Pulling fs layer", > "e9f9993bb931: Download complete", > "ea1d509b6f44: Verifying Checksum", > "ea1d509b6f44: Download complete", > "ea1d509b6f44: Pull complete", > "e9f9993bb931: Pull complete", > "Digest: sha256:af12594500608f07f8d38590e2c9b2983e5d81ae8b63aec042f36411b0e76adc", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-25 10:03:37,277 DEBUG: 9422 -- NET_HOST enabled", > "2018-06-25 10:03:37,277 DEBUG: 9422 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpoNyN8Y:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-25 10:03:39,864 DEBUG: 9423 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.39 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[database/metering_time_to_live]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/filter_project]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/archive_policy]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/resources_definition_file]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/File[event_pipeline]/ensure: defined content as '{md5}dafea5c96d5da5251f9b8a275c6d71aa'", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/Ceilometer_config[notification/ack_on_event_error]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: 
created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 0.62 seconds", > " Total: 31", > " Success: 31", > " Total: 158", > " Out of sync: 31", > " Changed: 31", > " Skipped: 35", > " Ceilometer config: 0.51", > " Config retrieval: 1.62", > " Last run: 1529921018", > " Total: 2.13", > " Config: 1529921016", > "Gathering files modified after 2018-06-25 10:03:32.229706380 +0000", > "2018-06-25 10:03:39,864 DEBUG: 9423 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config /etc/config.pp", > "Warning: ModuleLoader: module 'ceilometer' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ceilometer/manifests/agent/notification.pp\", 118]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer/agent/notification.pp\", 34]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:03:32.229706380 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/ceilometer --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-06-25 10:03:39,864 INFO: 9423 -- Removing container: docker-puppet-ceilometer", > "2018-06-25 10:03:39,909 DEBUG: 9423 -- docker-puppet-ceilometer", > "2018-06-25 10:03:39,910 INFO: 9423 -- Finished processing puppet configs for ceilometer", > "2018-06-25 10:03:44,288 DEBUG: 9421 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.84 seconds", > "Notice: /Stage[main]/Heat::Api_cfn/Heat_config[heat_api_cfn/bind_host]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}f58b93e49dcf0941699fec4407cf9012'", > "Notice: /Stage[main]/Apache::Mod::Headers/Apache::Mod[headers]/File[headers.load]/ensure: defined content as '{md5}96094c96352002c43ada5bdf8650ff38'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[/var/www/cgi-bin/heat]/ensure: 
created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[heat_api_cfn_wsgi]/ensure: defined content as '{md5}c3ae61ab87649c8cdfab8977da2b194b'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/Apache::Vhost[heat_api_cfn_wsgi]/Concat[10-heat_api_cfn_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_cfn_wsgi.conf]/ensure: defined content as '{md5}724c1dd97fe30e10b772a4eef7eecbf9'", > "Notice: Applied catalog in 2.37 seconds", > " Total: 337", > " File: 0.22", > " Heat config: 1.45", > " Last run: 1529921022", > " Config retrieval: 4.36", > " Total: 6.09", > "Gathering files modified after 2018-06-25 10:03:31.115700907 +0000", > "2018-06-25 10:03:44,288 DEBUG: 9421 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat_api_cfn.origin_of_time", > "+ touch /var/lib/config-data/heat_api_cfn.origin_of_time", > " with Stdlib::Compat::Integer. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/heat/manifests/wsgi/apache_api_cfn.pp\", 125]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api_cfn", > "++ stat -c %y /var/lib/config-data/heat_api_cfn.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:03:31.115700907 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api_cfn", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api_cfn", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api_cfn.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat_api_cfn --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat_api_cfn --mtime=1970-01-01", > "2018-06-25 10:03:44,288 INFO: 9421 -- Removing container: docker-puppet-heat_api_cfn", > "2018-06-25 10:03:44,343 DEBUG: 9421 -- docker-puppet-heat_api_cfn", > "2018-06-25 10:03:44,343 INFO: 9421 -- Finished processing puppet configs for heat_api_cfn", > "2018-06-25 10:03:49,333 DEBUG: 9422 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.40 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: 
created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/endpoint_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/tenant_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_status_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_data_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/l3_ha]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/max_l3_agents_per_router]/ensure: 
created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/rpc_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_distributed]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/enable_dvr]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/allow_automatic_l3agent_failover]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_firewall_rule]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_network_gateway]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_packet_filter]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_isolated_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/force_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_metadata_network]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/resync_interval]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_dns_servers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/agent_mode]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: 
/Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_password]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_userid]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Oslo::Middleware[neutron_config]/Neutron_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog in 1.56 seconds", > " Total: 107", > " Success: 107", > " Changed: 107", > " Out of sync: 107", > " Total: 359", > " Skipped: 44", > " Neutron api config: 0.00", > " Neutron l3 agent config: 0.01", > " Neutron agent ovs: 0.01", > " Neutron metadata agent config: 0.02", > " Neutron plugin ml2: 0.03", > " Package: 0.04", > " Neutron dhcp agent config: 0.08", > " Neutron config: 1.12", > " Last run: 1529921028", > " Config retrieval: 3.80", > " Total: 5.14", > " Config: 1529921022", > "Gathering files modified after 2018-06-25 10:03:37.485731884 +0000", > "2018-06-25 10:03:49,333 DEBUG: 9422 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time", > "+ touch /var/lib/config-data/neutron.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp", > "Warning: Scope(Class[Neutron]): 
neutron::rabbit_host, neutron::rabbit_hosts, neutron::rabbit_password, neutron::rabbit_port, neutron::rabbit_user, neutron::rabbit_virtual_host and neutron::rpc_backend are deprecated. Please use neutron::default_transport_url instead.", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 530]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/server.pp\", 104]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 132]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/db.pp\", 69]:[\"/etc/puppet/modules/neutron/manifests/server.pp\", 315]", > "Warning: Scope(Class[Neutron::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Unknown variable: '::neutron::params::metadata_agent_package'. at /etc/puppet/modules/neutron/manifests/agents/metadata.pp:122:6", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 219]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron", > "++ stat -c %y /var/lib/config-data/neutron.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:03:37.485731884 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/neutron", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/neutron --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01", > "2018-06-25 10:03:49,333 INFO: 9422 -- Removing container: docker-puppet-neutron", > "2018-06-25 10:03:49,376 DEBUG: 9422 -- docker-puppet-neutron", > "2018-06-25 10:03:49,376 INFO: 9422 -- Finished processing puppet configs for neutron", > "2018-06-25 10:03:49,377 INFO: 9422 -- Starting configuration of horizon using image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-25 10:03:49,377 DEBUG: 9422 -- config_volume horizon", > "2018-06-25 10:03:49,377 DEBUG: 9422 -- puppet_tags file,file_line,concat,augeas,cron,horizon_config", > "2018-06-25 10:03:49,377 DEBUG: 9422 -- manifest include ::tripleo::profile::base::horizon", > "2018-06-25 10:03:49,377 DEBUG: 9422 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-25 10:03:49,377 DEBUG: 9422 -- volumes []", > "2018-06-25 10:03:49,377 INFO: 9422 -- Removing container: docker-puppet-horizon", > "2018-06-25 10:03:49,437 INFO: 9422 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-25 10:03:54,546 DEBUG: 9422 -- Trying to pull repository 
192.168.24.1:8787/rhosp14/openstack-horizon ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-horizon", > "76e0e41ffb2e: Pulling fs layer", > "76e0e41ffb2e: Download complete", > "76e0e41ffb2e: Pull complete", > "Digest: sha256:985bc1250661a931ac3368fe39a6651116c123db6c18789bfdb7da2c61741b0d", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-25 10:03:54,549 DEBUG: 9422 -- NET_HOST enabled", > "2018-06-25 10:03:54,549 DEBUG: 9422 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-horizon --env PUPPET_TAGS=file,file_line,concat,augeas,cron,horizon_config --env NAME=horizon --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp2GlS7D:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-25 10:04:04,040 DEBUG: 9422 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.21 seconds", > "Notice: /Stage[main]/Apache::Mod::Remoteip/File[remoteip.conf]/ensure: defined content as 
'{md5}9fb2db37853f227d1c8929fa9832baf0'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon]/mode: mode changed '0750' to '0751'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon/horizon.log]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}8637b6278050cef1484e0f4bfce8ddab'", > "Notice: /Stage[main]/Apache::Mod::Remoteip/Apache::Mod[remoteip]/File[remoteip.load]/ensure: defined content as '{md5}118eb7518a1d018a162d23dfe32c4bad'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/content: content changed '{md5}601e633104479c5b9ee828b4bae911ac' to '{md5}6d34c398151d0843ea367381536242d0'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/owner: owner changed 'horizon' to 'apache'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/group: group changed 'horizon' to 'apache'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/etc/httpd/conf.d/openstack-dashboard.conf]/content: content changed '{md5}4cb4b1391d3553951208fad1ce791e5c' to '{md5}3f4b1c53d0e150dae37b3ee5dcaf622d'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[10-horizon_vhost.conf]/File[/etc/httpd/conf.d/10-horizon_vhost.conf]/ensure: defined content as '{md5}f54a7e47dcdd22af2973ae45425438bf'", > "Notice: Applied catalog in 0.72 seconds", > " Total: 86", > " Success: 86", > " Total: 172", > " Out of sync: 84", > " Changed: 84", > " Last run: 1529921043", > " Config retrieval: 2.58", > " Total: 2.80", > " Config: 1529921039", > "Gathering files modified after 2018-06-25 10:03:54.725811964 +0000", > "2018-06-25 10:04:04,041 DEBUG: 9422 -- + mkdir -p /etc/puppet", > "+ '[' -n 
file,file_line,concat,augeas,cron,horizon_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,horizon_config'", > "+ origin_of_time=/var/lib/config-data/horizon.origin_of_time", > "+ touch /var/lib/config-data/horizon.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,horizon_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/horizon.pp\", 97]:[\"/etc/config.pp\", 2]", > "Warning: ModuleLoader: module 'horizon' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: Undefined variable ''; ", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 559]:[\"/etc/config.pp\", 2]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 560]:[\"/etc/config.pp\", 2]", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 562]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/horizon", > "++ stat -c %y /var/lib/config-data/horizon.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-25 10:03:54.725811964 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/horizon", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/horizon", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/horizon.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/horizon --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/horizon --mtime=1970-01-01", > "2018-06-25 10:04:04,041 INFO: 9422 -- Removing container: docker-puppet-horizon", > "2018-06-25 10:04:04,089 DEBUG: 9422 -- docker-puppet-horizon", > "2018-06-25 10:04:04,089 INFO: 9422 -- Finished processing puppet configs for horizon", > "2018-06-25 10:04:04,090 DEBUG: 9420 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-06-25 10:04:04,090 DEBUG: 9420 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-06-25 10:04:04,093 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/memcached/etc/sysconfig.md5sum for config_volume /var/lib/config-data/memcached/etc/sysconfig", > "2018-06-25 10:04:04,094 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-06-25 10:04:04,094 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-06-25 10:04:04,094 DEBUG: 9420 -- Updating config hash for mysql_bootstrap, config_volume=heat_api_cfn hash=674e3d91f6c95223f910efb0ee5fbc3d", > "2018-06-25 10:04:04,094 
DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-06-25 10:04:04,094 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-06-25 10:04:04,094 DEBUG: 9420 -- Updating config hash for rabbitmq_bootstrap, config_volume=heat_api_cfn hash=2b7ad56015f3db44a35a149b06b117d4", > "2018-06-25 10:04:04,094 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/memcached/etc/sysconfig.md5sum for config_volume /var/lib/config-data/memcached/etc/sysconfig", > "2018-06-25 10:04:04,097 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement", > "2018-06-25 10:04:04,097 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement", > "2018-06-25 10:04:04,097 DEBUG: 9420 -- Updating config hash for nova_placement, config_volume=heat_api_cfn hash=c833fb68dba1cf8b46fefcd56ff77767", > "2018-06-25 10:04:04,097 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-25 10:04:04,097 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-25 10:04:04,097 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/heat/etc/heat.md5sum for config_volume /var/lib/config-data/heat/etc/heat", > "2018-06-25 10:04:04,097 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/heat/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/heat/etc/my.cnf.d", > "2018-06-25 10:04:04,098 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data.md5sum for config_volume /var/lib/config-data", > 
"2018-06-25 10:04:04,098 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/swift/etc", > "2018-06-25 10:04:04,098 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-25 10:04:04,098 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-25 10:04:04,098 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-06-25 10:04:04,098 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-06-25 10:04:04,098 DEBUG: 9420 -- Updating config hash for keystone_cron, config_volume=heat_api_cfn hash=58baf5c6ac7a1ab4bfbf8e418ba88b2b", > "2018-06-25 10:04:04,098 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/panko/etc.md5sum for config_volume /var/lib/config-data/panko/etc", > "2018-06-25 10:04:04,098 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/panko/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/panko/etc/my.cnf.d", > "2018-06-25 10:04:04,099 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-25 10:04:04,099 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-25 10:04:04,099 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-06-25 10:04:04,099 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume 
/var/lib/config-data/puppet-generated/keystone", > "2018-06-25 10:04:04,099 DEBUG: 9420 -- Updating config hash for keystone_db_sync, config_volume=heat_api_cfn hash=58baf5c6ac7a1ab4bfbf8e418ba88b2b", > "2018-06-25 10:04:04,099 DEBUG: 9420 -- Updating config hash for keystone, config_volume=heat_api_cfn hash=58baf5c6ac7a1ab4bfbf8e418ba88b2b", > "2018-06-25 10:04:04,099 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/aodh/etc/aodh.md5sum for config_volume /var/lib/config-data/aodh/etc/aodh", > "2018-06-25 10:04:04,099 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/aodh/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/aodh/etc/my.cnf.d", > "2018-06-25 10:04:04,100 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-25 10:04:04,100 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-25 10:04:04,100 DEBUG: 9420 -- Updating config hash for neutron_ovs_bridge, config_volume=heat_api_cfn hash=b9f48bfad1b8c3c13252c74a99ce5d4a", > "2018-06-25 10:04:04,100 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/cinder/etc/cinder.md5sum for config_volume /var/lib/config-data/cinder/etc/cinder", > "2018-06-25 10:04:04,100 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/cinder/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/cinder/etc/my.cnf.d", > "2018-06-25 10:04:04,100 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-25 10:04:04,100 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-25 10:04:04,100 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume 
/var/lib/config-data/puppet-generated/glance_api", > "2018-06-25 10:04:04,100 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-06-25 10:04:04,100 DEBUG: 9420 -- Updating config hash for glance_api_db_sync, config_volume=heat_api_cfn hash=c6b17dfebd49714e8e248d990784c5a8", > "2018-06-25 10:04:04,100 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/neutron/etc.md5sum for config_volume /var/lib/config-data/neutron/etc", > "2018-06-25 10:04:04,101 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/neutron/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/neutron/etc/my.cnf.d", > "2018-06-25 10:04:04,101 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/neutron/usr/share.md5sum for config_volume /var/lib/config-data/neutron/usr/share", > "2018-06-25 10:04:04,101 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/sahara/etc/sahara.md5sum for config_volume /var/lib/config-data/sahara/etc/sahara", > "2018-06-25 10:04:04,101 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon", > "2018-06-25 10:04:04,101 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon", > "2018-06-25 10:04:04,101 DEBUG: 9420 -- Updating config hash for horizon, config_volume=heat_api_cfn hash=37383244b772fb0288e505093e04a49f", > "2018-06-25 10:04:04,103 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck", > "2018-06-25 10:04:04,103 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck", > "2018-06-25 10:04:04,103 DEBUG: 9420 -- Updating config hash for 
clustercheck, config_volume=heat_api_cfn hash=ab57589722de1a773a34c435437d5861", > "2018-06-25 10:04:04,103 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-06-25 10:04:04,103 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-06-25 10:04:04,104 DEBUG: 9420 -- Updating config hash for mysql_restart_bundle, config_volume=heat_api_cfn hash=674e3d91f6c95223f910efb0ee5fbc3d", > "2018-06-25 10:04:04,104 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy", > "2018-06-25 10:04:04,104 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy", > "2018-06-25 10:04:04,104 DEBUG: 9420 -- Updating config hash for haproxy_restart_bundle, config_volume=heat_api_cfn hash=87adb44af329208c5a4f43e38dfa8cc5", > "2018-06-25 10:04:04,104 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-06-25 10:04:04,104 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-06-25 10:04:04,104 DEBUG: 9420 -- Updating config hash for rabbitmq_restart_bundle, config_volume=heat_api_cfn hash=2b7ad56015f3db44a35a149b06b117d4", > "2018-06-25 10:04:04,104 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon/etc", > "2018-06-25 10:04:04,104 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis", > "2018-06-25 
10:04:04,104 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis", > "2018-06-25 10:04:04,105 DEBUG: 9420 -- Updating config hash for redis_restart_bundle, config_volume=heat_api_cfn hash=fdc3440a3d2612f9126cc6eb17c4a5aa", > "2018-06-25 10:04:04,106 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-25 10:04:04,106 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-25 10:04:04,106 DEBUG: 9420 -- Updating config hash for cinder_volume_restart_bundle, config_volume=heat_api_cfn hash=088298229463a8e8ff039df47d04ea21", > "2018-06-25 10:04:04,106 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-25 10:04:04,107 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-25 10:04:04,107 DEBUG: 9420 -- Updating config hash for gnocchi_statsd, config_volume=heat_api_cfn hash=a26cc2d55406e84fe49eeb1dd7c7e244", > "2018-06-25 10:04:04,107 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-25 10:04:04,107 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-25 10:04:04,107 DEBUG: 9420 -- Updating config hash for cinder_backup_restart_bundle, config_volume=heat_api_cfn hash=088298229463a8e8ff039df47d04ea21", > "2018-06-25 10:04:04,107 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume 
/var/lib/config-data/puppet-generated/gnocchi", > "2018-06-25 10:04:04,107 DEBUG: 9420 -- Updating config hash for gnocchi_metricd, config_volume=heat_api_cfn hash=a26cc2d55406e84fe49eeb1dd7c7e244", > "2018-06-25 10:04:04,107 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-25 10:04:04,108 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-25 10:04:04,108 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-25 10:04:04,108 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/ceilometer/etc/ceilometer.md5sum for config_volume /var/lib/config-data/ceilometer/etc/ceilometer", > "2018-06-25 10:04:04,108 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-25 10:04:04,108 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-25 10:04:04,108 DEBUG: 9420 -- Updating config hash for gnocchi_api, config_volume=heat_api_cfn hash=a26cc2d55406e84fe49eeb1dd7c7e244", > "2018-06-25 10:04:04,110 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,110 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,110 DEBUG: 9420 -- Updating config hash for swift_container_updater, config_volume=heat_api_cfn hash=e521e96e986069ff151a76a7488eff29", > "2018-06-25 10:04:04,110 DEBUG: 9420 -- Looking for hashfile 
/var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-25 10:04:04,110 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-25 10:04:04,110 DEBUG: 9420 -- Updating config hash for aodh_evaluator, config_volume=heat_api_cfn hash=4bb22b7fba2206e8d7f0536b8ba126f2", > "2018-06-25 10:04:04,110 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-25 10:04:04,110 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-25 10:04:04,111 DEBUG: 9420 -- Updating config hash for nova_scheduler, config_volume=heat_api_cfn hash=8ec5164d7f2c9a552322d8cedb043822", > "2018-06-25 10:04:04,111 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,111 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,111 DEBUG: 9420 -- Updating config hash for swift_object_server, config_volume=heat_api_cfn hash=e521e96e986069ff151a76a7488eff29", > "2018-06-25 10:04:04,111 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-25 10:04:04,111 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-25 10:04:04,111 DEBUG: 9420 -- Updating config hash for cinder_api, config_volume=heat_api_cfn hash=088298229463a8e8ff039df47d04ea21", > "2018-06-25 10:04:04,111 DEBUG: 9420 -- Updating config hash for 
swift_proxy, config_volume=heat_api_cfn hash=e521e96e986069ff151a76a7488eff29", > "2018-06-25 10:04:04,111 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-25 10:04:04,111 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-25 10:04:04,112 DEBUG: 9420 -- Updating config hash for neutron_dhcp, config_volume=heat_api_cfn hash=b9f48bfad1b8c3c13252c74a99ce5d4a", > "2018-06-25 10:04:04,112 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-06-25 10:04:04,112 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-06-25 10:04:04,112 DEBUG: 9420 -- Updating config hash for heat_api, config_volume=heat_api_cfn hash=e1a9abed6f7bab39af90e730bc0739d8", > "2018-06-25 10:04:04,112 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,112 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,112 DEBUG: 9420 -- Updating config hash for swift_object_auditor, config_volume=heat_api_cfn hash=e521e96e986069ff151a76a7488eff29", > "2018-06-25 10:04:04,112 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-25 10:04:04,112 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-25 10:04:04,112 DEBUG: 9420 -- Updating 
config hash for neutron_metadata_agent, config_volume=heat_api_cfn hash=b9f48bfad1b8c3c13252c74a99ce5d4a", > "2018-06-25 10:04:04,112 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-25 10:04:04,112 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-25 10:04:04,113 DEBUG: 9420 -- Updating config hash for ceilometer_agent_central, config_volume=heat_api_cfn hash=aeefc0fc20548c752db55dc280f97ab9", > "2018-06-25 10:04:04,113 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,113 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,113 DEBUG: 9420 -- Updating config hash for swift_account_replicator, config_volume=heat_api_cfn hash=e521e96e986069ff151a76a7488eff29", > "2018-06-25 10:04:04,113 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-25 10:04:04,113 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-25 10:04:04,113 DEBUG: 9420 -- Updating config hash for aodh_notifier, config_volume=heat_api_cfn hash=4bb22b7fba2206e8d7f0536b8ba126f2", > "2018-06-25 10:04:04,113 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-25 10:04:04,113 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-25 
10:04:04,113 DEBUG: 9420 -- Updating config hash for nova_api_cron, config_volume=heat_api_cfn hash=8ec5164d7f2c9a552322d8cedb043822", > "2018-06-25 10:04:04,114 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-25 10:04:04,114 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-25 10:04:04,114 DEBUG: 9420 -- Updating config hash for nova_consoleauth, config_volume=heat_api_cfn hash=8ec5164d7f2c9a552322d8cedb043822", > "2018-06-25 10:04:04,114 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-25 10:04:04,114 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-25 10:04:04,114 DEBUG: 9420 -- Updating config hash for gnocchi_db_sync, config_volume=heat_api_cfn hash=a26cc2d55406e84fe49eeb1dd7c7e244", > "2018-06-25 10:04:04,114 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,114 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,114 DEBUG: 9420 -- Updating config hash for swift_account_reaper, config_volume=heat_api_cfn hash=e521e96e986069ff151a76a7488eff29", > "2018-06-25 10:04:04,114 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-25 10:04:04,114 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume 
/var/lib/config-data/puppet-generated/ceilometer", > "2018-06-25 10:04:04,115 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-06-25 10:04:04,115 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-06-25 10:04:04,115 DEBUG: 9420 -- Updating config hash for ceilometer_agent_notification, config_volume=heat_api_cfn hash=aeefc0fc20548c752db55dc280f97ab9-23bcbc4e622ef53df1bafec337f1105b", > "2018-06-25 10:04:04,115 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-25 10:04:04,115 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-25 10:04:04,115 DEBUG: 9420 -- Updating config hash for nova_vnc_proxy, config_volume=heat_api_cfn hash=8ec5164d7f2c9a552322d8cedb043822", > "2018-06-25 10:04:04,115 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,115 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,115 DEBUG: 9420 -- Updating config hash for swift_rsync, config_volume=heat_api_cfn hash=e521e96e986069ff151a76a7488eff29", > "2018-06-25 10:04:04,115 DEBUG: 9420 -- Updating config hash for nova_api, config_volume=heat_api_cfn hash=8ec5164d7f2c9a552322d8cedb043822", > "2018-06-25 10:04:04,116 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-25 10:04:04,116 DEBUG: 9420 -- Got hashfile 
/var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-25 10:04:04,116 DEBUG: 9420 -- Updating config hash for aodh_api, config_volume=heat_api_cfn hash=4bb22b7fba2206e8d7f0536b8ba126f2", > "2018-06-25 10:04:04,116 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-25 10:04:04,116 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-25 10:04:04,116 DEBUG: 9420 -- Updating config hash for nova_metadata, config_volume=heat_api_cfn hash=8ec5164d7f2c9a552322d8cedb043822", > "2018-06-25 10:04:04,116 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat.md5sum for config_volume /var/lib/config-data/puppet-generated/heat", > "2018-06-25 10:04:04,116 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/heat.md5sum for config_volume /var/lib/config-data/puppet-generated/heat", > "2018-06-25 10:04:04,116 DEBUG: 9420 -- Updating config hash for heat_engine, config_volume=heat_api_cfn hash=e623b4b2112a0a4ff927c7d26e273984", > "2018-06-25 10:04:04,116 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,117 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,117 DEBUG: 9420 -- Updating config hash for swift_container_server, config_volume=heat_api_cfn hash=e521e96e986069ff151a76a7488eff29", > "2018-06-25 10:04:04,117 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,117 DEBUG: 9420 -- Updating config hash for 
swift_object_replicator, config_volume=heat_api_cfn hash=e521e96e986069ff151a76a7488eff29", > "2018-06-25 10:04:04,117 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-25 10:04:04,117 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-25 10:04:04,117 DEBUG: 9420 -- Updating config hash for neutron_l3_agent, config_volume=heat_api_cfn hash=b9f48bfad1b8c3c13252c74a99ce5d4a", > "2018-06-25 10:04:04,117 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-25 10:04:04,117 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-25 10:04:04,117 DEBUG: 9420 -- Updating config hash for cinder_scheduler, config_volume=heat_api_cfn hash=088298229463a8e8ff039df47d04ea21", > "2018-06-25 10:04:04,118 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-25 10:04:04,118 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-25 10:04:04,118 DEBUG: 9420 -- Updating config hash for nova_conductor, config_volume=heat_api_cfn hash=8ec5164d7f2c9a552322d8cedb043822", > "2018-06-25 10:04:04,118 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn", > "2018-06-25 10:04:04,118 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn", > "2018-06-25 
10:04:04,118 DEBUG: 9420 -- Updating config hash for heat_api_cfn, config_volume=heat_api_cfn hash=2643f6820e9176b431a8f69611c23e36", > "2018-06-25 10:04:04,118 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara", > "2018-06-25 10:04:04,118 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara", > "2018-06-25 10:04:04,118 DEBUG: 9420 -- Updating config hash for sahara_api, config_volume=heat_api_cfn hash=403d99341a88a8b2dfb479d0011dc446", > "2018-06-25 10:04:04,119 DEBUG: 9420 -- Updating config hash for sahara_engine, config_volume=heat_api_cfn hash=403d99341a88a8b2dfb479d0011dc446", > "2018-06-25 10:04:04,119 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-25 10:04:04,119 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-25 10:04:04,119 DEBUG: 9420 -- Updating config hash for neutron_ovs_agent, config_volume=heat_api_cfn hash=b9f48bfad1b8c3c13252c74a99ce5d4a", > "2018-06-25 10:04:04,119 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-25 10:04:04,119 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-25 10:04:04,119 DEBUG: 9420 -- Updating config hash for cinder_api_cron, config_volume=heat_api_cfn hash=088298229463a8e8ff039df47d04ea21", > "2018-06-25 10:04:04,119 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,119 
DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,119 DEBUG: 9420 -- Updating config hash for swift_account_auditor, config_volume=heat_api_cfn hash=e521e96e986069ff151a76a7488eff29", > "2018-06-25 10:04:04,120 DEBUG: 9420 -- Updating config hash for swift_container_replicator, config_volume=heat_api_cfn hash=e521e96e986069ff151a76a7488eff29", > "2018-06-25 10:04:04,120 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,120 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,120 DEBUG: 9420 -- Updating config hash for swift_object_updater, config_volume=heat_api_cfn hash=e521e96e986069ff151a76a7488eff29", > "2018-06-25 10:04:04,120 DEBUG: 9420 -- Updating config hash for swift_object_expirer, config_volume=heat_api_cfn hash=e521e96e986069ff151a76a7488eff29", > "2018-06-25 10:04:04,120 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-06-25 10:04:04,120 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-06-25 10:04:04,121 DEBUG: 9420 -- Updating config hash for heat_api_cron, config_volume=heat_api_cfn hash=e1a9abed6f7bab39af90e730bc0739d8", > "2018-06-25 10:04:04,121 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,121 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 
10:04:04,121 DEBUG: 9420 -- Updating config hash for swift_container_auditor, config_volume=heat_api_cfn hash=e521e96e986069ff151a76a7488eff29", > "2018-06-25 10:04:04,121 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-06-25 10:04:04,121 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-06-25 10:04:04,121 DEBUG: 9420 -- Updating config hash for panko_api, config_volume=heat_api_cfn hash=23bcbc4e622ef53df1bafec337f1105b", > "2018-06-25 10:04:04,121 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-25 10:04:04,121 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-25 10:04:04,121 DEBUG: 9420 -- Updating config hash for aodh_listener, config_volume=heat_api_cfn hash=4bb22b7fba2206e8d7f0536b8ba126f2", > "2018-06-25 10:04:04,121 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-25 10:04:04,122 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-25 10:04:04,122 DEBUG: 9420 -- Updating config hash for neutron_api, config_volume=heat_api_cfn hash=b9f48bfad1b8c3c13252c74a99ce5d4a", > "2018-06-25 10:04:04,122 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 10:04:04,122 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-25 
10:04:04,122 DEBUG: 9420 -- Updating config hash for swift_account_server, config_volume=heat_api_cfn hash=e521e96e986069ff151a76a7488eff29", > "2018-06-25 10:04:04,122 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-06-25 10:04:04,122 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-06-25 10:04:04,122 DEBUG: 9420 -- Updating config hash for glance_api, config_volume=heat_api_cfn hash=c6b17dfebd49714e8e248d990784c5a8", > "2018-06-25 10:04:04,122 DEBUG: 9420 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-25 10:04:04,122 DEBUG: 9420 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-25 10:04:04,122 DEBUG: 9420 -- Updating config hash for logrotate_crond, config_volume=heat_api_cfn hash=600f21d3b8938ae7bea1a631522521cb" > ] >} >2018-06-25 06:04:05,473 p=25239 u=mistral | TASK [Start containers for step 1] ********************************************* >2018-06-25 06:04:06,155 p=25239 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:04:06,231 p=25239 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:04:34,636 p=25239 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:04:34,662 p=25239 u=mistral | TASK [Debug output for task which failed: Start containers for step 1] ********* >2018-06-25 
06:04:34,789 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-backup ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-backup", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "5e7b63a88a76: Already exists", > "89c035649aaf: Pulling fs layer", > "89c035649aaf: Verifying Checksum", > "89c035649aaf: Download complete", > "89c035649aaf: Pull complete", > "Digest: sha256:bbd94b3a8477e286264ef2b5660a8c60d872d945e37c6023ae19c6dd09ea156f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-volume ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-volume", > "606ec38d3d26: Pulling fs layer", > "606ec38d3d26: Download complete", > "606ec38d3d26: Pull complete", > "Digest: sha256:d4d518ef6aad7c077ff97a0ad1de70ef4074ace3ddde85fdfb70e12e63891ea5", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", > "stdout: ", > "stdout: 0170262c7022cd05637801b10df58f579dacd21da991dd1ca36581a57f083f2a", > "stdout: Installing MariaDB/MySQL system tables in '/var/lib/mysql' ...", > "OK", > "Filling help tables...", > "Creating OpenGIS required SP-s...", > "To start mysqld at boot time you have to copy", > "support-files/mysql.server to the right place for your system", > "PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !", > "To do so, start the server, then issue the following commands:", > "'/usr/bin/mysqladmin' -u root password 'new-password'", > "'/usr/bin/mysqladmin' -u root -h controller-0 password 'new-password'", > "Alternatively you can 
run:", > "'/usr/bin/mysql_secure_installation'", > "which will also give you the option of removing the test", > "databases and anonymous user created by default. This is", > "strongly recommended for production servers.", > "See the MariaDB Knowledgebase at http://mariadb.com/kb or the", > "MySQL manual for more instructions.", > "You can start the MariaDB daemon with:", > "cd '/usr' ; /usr/bin/mysqld_safe --datadir='/var/lib/mysql'", > "You can test the MariaDB daemon with mysql-test-run.pl", > "cd '/usr/mysql-test' ; perl mysql-test-run.pl", > "Please report any problems at http://mariadb.org/jira", > "The latest information about MariaDB is available at http://mariadb.org/.", > "You can find additional information about the MySQL part at:", > "http://dev.mysql.com", > "Consider joining MariaDB's strong and vibrant community:", > "https://mariadb.org/get-involved/", > "180625 10:04:25 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.", > "180625 10:04:25 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql", > "spawn mysql_secure_installation", > "NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB", > " SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!", > "In order to log into MariaDB to secure it, we'll need the current", > "password for the root user. If you've just installed MariaDB, and", > "you haven't set the root password yet, the password will be blank,", > "so you should just press enter here.", > "Enter current password for root (enter for none): ", > "OK, successfully used password, moving on...", > "Setting the root password ensures that nobody can log into the MariaDB", > "root user without the proper authorisation.", > "Set root password? [Y/n] y", > "New password: ", > "Re-enter new password: ", > "Password updated successfully!", > "Reloading privilege tables..", > " ... 
Success!", > "By default, a MariaDB installation has an anonymous user, allowing anyone", > "to log into MariaDB without having to have a user account created for", > "them. This is intended only for testing, and to make the installation", > "go a bit smoother. You should remove them before moving into a", > "production environment.", > "Remove anonymous users? [Y/n] y", > "Normally, root should only be allowed to connect from 'localhost'. This", > "ensures that someone cannot guess at the root password from the network.", > "Disallow root login remotely? [Y/n] n", > " ... skipping.", > "By default, MariaDB comes with a database named 'test' that anyone can", > "access. This is also intended only for testing, and should be removed", > "before moving into a production environment.", > "Remove test database and access to it? [Y/n] y", > " - Dropping test database...", > " - Removing privileges on test database...", > "Reloading the privilege tables will ensure that all changes made so far", > "will take effect immediately.", > "Reload privilege tables now? [Y/n] y", > "Cleaning up...", > "All done! 
If you've completed all of the above steps, your MariaDB", > "installation should now be secure.", > "Thanks for using MariaDB!", > "180625 10:04:28 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended", > "180625 10:04:29 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.", > "180625 10:04:29 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql", > "mysqld is alive", > "180625 10:04:32 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended", > "stderr: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", > "INFO:__main__:Validating config file", > "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", > "INFO:__main__:Copying service configuration files", > "INFO:__main__:Copying /dev/null to /etc/libqb/force-filesystem-sockets", > "INFO:__main__:Setting permission for /etc/libqb/force-filesystem-sockets", > "INFO:__main__:Deleting /etc/my.cnf.d/galera.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/my.cnf.d/galera.cnf to /etc/my.cnf.d/galera.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/sysconfig/clustercheck to /etc/sysconfig/clustercheck", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/root/.my.cnf to /root/.my.cnf", > "INFO:__main__:Writing out command to execute", > "2018-06-25 10:04:12 139632218847424 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-06-25 10:04:12 139632218847424 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 42 ...", > "2018-06-25 10:04:17 140315303135424 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-06-25 10:04:17 140315303135424 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 71 ...", > "2018-06-25 10:04:21 139794026932416 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-06-25 10:04:21 
139794026932416 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 101 ...", > "/usr/bin/mysqld_safe: line 755: ulimit: -1: invalid option", > "ulimit: usage: ulimit [-SHacdefilmnpqrstuvx] [limit]", > "stdout: 4a90d97059049d24c7e4bb0a86c06a47d5e373831ef2b56c8a973ae20758ce79" > ] >} >2018-06-25 06:04:34,805 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-06-25 06:04:34,819 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-06-25 06:04:34,845 p=25239 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks1.json exists] ******** >2018-06-25 06:04:35,286 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 06:04:35,300 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 06:04:35,319 p=25239 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 06:04:35,345 p=25239 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 1] ******************** >2018-06-25 06:04:35,376 p=25239 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:04:35,409 p=25239 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:04:35,427 p=25239 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:04:35,488 p=25239 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (bootstrap tasks) for step 1] *** 
>2018-06-25 06:04:35,528 p=25239 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:04:35,564 p=25239 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:04:35,586 p=25239 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:04:35,593 p=25239 u=mistral | PLAY [External deployment step 2] ********************************************** >2018-06-25 06:04:35,616 p=25239 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-06-25 06:04:35,634 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:04:35,652 p=25239 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-06-25 06:04:35,677 p=25239 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"} >2018-06-25 06:04:35,680 p=25239 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-06-25 06:04:35,684 p=25239 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-06-25 06:04:35,703 p=25239 u=mistral | TASK [generate inventory] ****************************************************** >2018-06-25 06:04:35,723 p=25239 u=mistral | skipping: 
[undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:04:35,740 p=25239 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-06-25 06:04:35,761 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:04:35,779 p=25239 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-06-25 06:04:35,796 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:04:35,813 p=25239 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-06-25 06:04:35,835 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:04:35,856 p=25239 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-06-25 06:04:35,876 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:04:35,893 p=25239 u=mistral | TASK [generate collect nodes uuid playbook] ************************************ >2018-06-25 06:04:35,910 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:04:35,927 p=25239 u=mistral | TASK [set ceph-ansible verbosity] ********************************************** >2018-06-25 06:04:35,956 p=25239 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_playbook_verbosity": 2}, "changed": false} >2018-06-25 06:04:35,973 p=25239 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-06-25 06:04:36,005 p=25239 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_command": "ANSIBLE_ACTION_PLUGINS=/usr/share/ceph-ansible/plugins/actions/ 
ANSIBLE_ROLES_PATH=/usr/share/ceph-ansible/roles/ ANSIBLE_LOG_PATH=\"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/ceph_ansible_command.log\" ANSIBLE_LIBRARY=/usr/share/ceph-ansible/library/ ANSIBLE_RETRY_FILES_ENABLED=False ANSIBLE_SSH_RETRIES=3 ANSIBLE_HOST_KEY_CHECKING=False DEFAULT_FORKS=25 ANSIBLE_CONFIG=/usr/share/ceph-ansible/ansible.cfg ansible-playbook --private-key /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ssh_private_key -vv --skip-tags package-install,with_pkg -i /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/inventory.yml --extra-vars @/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/extra_vars.yml"}, "changed": false} >2018-06-25 06:04:36,021 p=25239 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-06-25 06:09:19,778 p=25239 u=mistral | changed: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": true, "cmd": "ANSIBLE_ACTION_PLUGINS=/usr/share/ceph-ansible/plugins/actions/ ANSIBLE_ROLES_PATH=/usr/share/ceph-ansible/roles/ ANSIBLE_LOG_PATH=\"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/ceph_ansible_command.log\" ANSIBLE_LIBRARY=/usr/share/ceph-ansible/library/ ANSIBLE_RETRY_FILES_ENABLED=False ANSIBLE_SSH_RETRIES=3 ANSIBLE_HOST_KEY_CHECKING=False DEFAULT_FORKS=25 ANSIBLE_CONFIG=/usr/share/ceph-ansible/ansible.cfg ansible-playbook --private-key /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ssh_private_key -vv --skip-tags package-install,with_pkg -i /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/inventory.yml --extra-vars @/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/extra_vars.yml /usr/share/ceph-ansible/site-docker.yml.sample", "delta": "0:04:43.369778", "end": "2018-06-25 06:09:19.565162", "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "rc": 0, "start": "2018-06-25 06:04:36.195384", "stderr": "[DEPRECATION WARNING]: 
The use of 'static' has been deprecated. Use \n'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. \nThis feature will be removed in a future release. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: docker is kept for backwards compatibility but usage is \ndiscouraged. The module documentation details page may explain more about this \nrationale.. This feature will be removed in a future release. Deprecation \nwarnings can be disabled by setting deprecation_warnings=False in ansible.cfg.\n [WARNING]: Could not match supplied host pattern, ignoring: agents\n [WARNING]: Could not match supplied host pattern, ignoring: mdss\n [WARNING]: Could not match supplied host pattern, ignoring: rgws\n [WARNING]: Could not match supplied host pattern, ignoring: nfss\n [WARNING]: Could not match supplied host pattern, ignoring: restapis\n [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors\n [WARNING]: Could not match supplied host pattern, ignoring: iscsigws\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. 
Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ inventory_hostname ==\ngroups[mon_group_name][0] }}\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ inventory_hostname ==\ngroups[mon_group_name][0] }}\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. 
Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0\n}}\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0\n}}\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. 
Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. 
Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. 
Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.", "stderr_lines": ["[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use ", "'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. 
", "This feature will be removed in a future release. Deprecation warnings can be ", "disabled by setting deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: docker is kept for backwards compatibility but usage is ", "discouraged. The module documentation details page may explain more about this ", "rationale.. This feature will be removed in a future release. Deprecation ", "warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.", " [WARNING]: Could not match supplied host pattern, ignoring: agents", " [WARNING]: Could not match supplied host pattern, ignoring: mdss", " [WARNING]: Could not match supplied host pattern, ignoring: rgws", " [WARNING]: Could not match supplied host pattern, ignoring: nfss", " [WARNING]: Could not match supplied host pattern, ignoring: restapis", " [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors", " [WARNING]: Could not match supplied host pattern, ignoring: iscsigws", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. 
This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ inventory_hostname ==", "groups[mon_group_name][0] }}", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ inventory_hostname ==", "groups[mon_group_name][0] }}", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. 
Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0", "}}", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0", "}}", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. 
This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. 
Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. 
Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. 
This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg."], "stdout": "ansible-playbook 2.5.4\n config file = /usr/share/ceph-ansible/ansible.cfg\n configured module search path = [u'/usr/share/ceph-ansible/library']\n ansible python module location = /usr/lib/python2.7/site-packages/ansible\n executable location = /usr/bin/ansible-playbook\n python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]\nUsing /usr/share/ceph-ansible/ansible.cfg as config file\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/non_containerized.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/containerized.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rgw/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/common.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/start_rbd_mirror.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/start_docker_rbd_mirror.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/start_restapi.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/copy_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/start_docker_restapi.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml\n\nPLAYBOOK: site-docker.yml.sample ***********************************************\n12 plays in /usr/share/ceph-ansible/site-docker.yml.sample\n\nPLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,iscsigws,mgrs] ***\n\nTASK [gather facts] ************************************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:24\nMonday 25 June 2018 06:04:38 -0400 (0:00:00.185) 0:00:00.185 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [gather and delegate facts] ***********************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:29\nMonday 25 June 2018 06:04:38 -0400 (0:00:00.079) 0:00:00.265 *********** \nok: [controller-0 -> 192.168.24.13] => (item=compute-0)\nok: [controller-0 -> 192.168.24.14] => (item=controller-0)\nok: [controller-0 -> 192.168.24.16] => (item=ceph-0)\n\nTASK [check if it is atomic host] **********************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:38\nMonday 25 June 2018 06:04:48 -0400 (0:00:09.286) 0:00:09.551 *********** \nok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\nok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\nok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [set_fact is_atomic] ******************************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:45\nMonday 25 June 2018 06:04:48 -0400 (0:00:00.735) 0:00:10.287 *********** \nok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\nok: [ceph-0] => {\"ansible_facts\": 
{\"is_atomic\": false}, \"changed\": false}\nok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\nMETA: ran handlers\nMETA: ran handlers\n\nTASK [pull rhceph image] *******************************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:66\nMonday 25 June 2018 06:04:49 -0400 (0:00:00.262) 0:00:10.549 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\nMETA: ran handlers\n\nPLAY [mons] ********************************************************************\nMETA: ran handlers\n\nTASK [set ceph monitor install 'In Progress'] **********************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:76\nMonday 25 June 2018 06:04:49 -0400 (0:00:00.191) 0:00:10.741 *********** \nok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"start\": \"20180625060449Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\nMETA: ran handlers\n\nPLAY [mons] ********************************************************************\nMETA: ran handlers\n\nTASK [ceph-defaults : check for a mon container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2\nMonday 25 June 2018 06:04:49 -0400 (0:00:00.162) 0:00:10.903 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.029864\", \"end\": \"2018-06-25 10:04:50.267566\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:04:50.237702\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": 
[]}

TASK [ceph-defaults : check for an osd container] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11
Monday 25 June 2018 06:04:50 -0400 (0:00:00.662) 0:00:11.566 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a mds container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20
Monday 25 June 2018 06:04:50 -0400 (0:00:00.051) 0:00:11.617 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a rgw container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29
Monday 25 June 2018 06:04:50 -0400 (0:00:00.047) 0:00:11.664 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a mgr container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38
Monday 25 June 2018 06:04:50 -0400 (0:00:00.050) 0:00:11.715 *********** 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mgr-controller-0"], "delta": "0:00:00.029060", "end": "2018-06-25 10:04:50.971368", "failed_when_result": false, "rc": 0, "start": "2018-06-25 10:04:50.942308", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check for a rbd mirror container] ************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47
Monday 25 June 2018 06:04:50 -0400 (0:00:00.554) 0:00:12.269 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a nfs container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56
Monday 25 June 2018 06:04:50 -0400 (0:00:00.047) 0:00:12.317 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mon socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2
Monday 25 June 2018 06:04:50 -0400 (0:00:00.048) 0:00:12.365 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11
Monday 25 June 2018 06:04:51 -0400 (0:00:00.054) 0:00:12.420 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21
Monday 25 June 2018 06:04:51 -0400 (0:00:00.046) 0:00:12.467 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph osd socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30
Monday 25 June 2018 06:04:51 -0400 (0:00:00.046) 0:00:12.513 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40
Monday 25 June 2018 06:04:51 -0400 (0:00:00.045) 0:00:12.559 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50
Monday 25 June 2018 06:04:51 -0400 (0:00:00.046) 0:00:12.605 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mds socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59
Monday 25 June 2018 06:04:51 -0400 (0:00:00.047) 0:00:12.653 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69
Monday 25 June 2018 06:04:51 -0400 (0:00:00.046) 0:00:12.699 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79
Monday 25 June 2018 06:04:51 -0400 (0:00:00.044) 0:00:12.744 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph rgw socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88
Monday 25 June 2018 06:04:51 -0400 (0:00:00.044) 0:00:12.788 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98
Monday 25 June 2018 06:04:51 -0400 (0:00:00.043) 0:00:12.831 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108
Monday 25 June 2018 06:04:51 -0400 (0:00:00.044) 0:00:12.876 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mgr socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117
Monday 25 June 2018 06:04:51 -0400 (0:00:00.051) 0:00:12.928 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127
Monday 25 June 2018 06:04:51 -0400 (0:00:00.044) 0:00:12.972 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137
Monday 25 June 2018 06:04:51 -0400 (0:00:00.045) 0:00:13.018 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146
Monday 25 June 2018 06:04:51 -0400 (0:00:00.043) 0:00:13.061 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156
Monday 25 June 2018 06:04:51 -0400 (0:00:00.044) 0:00:13.106 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166
Monday 25 June 2018 06:04:51 -0400 (0:00:00.044) 0:00:13.151 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175
Monday 25 June 2018 06:04:51 -0400 (0:00:00.045) 0:00:13.196 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184
Monday 25 June 2018 06:04:51 -0400 (0:00:00.042) 0:00:13.238 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194
Monday 25 June 2018 06:04:51 -0400 (0:00:00.045) 0:00:13.284 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if it is atomic host] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2
Monday 25 June 2018 06:04:51 -0400 (0:00:00.042) 0:00:13.326 *********** 
ok: [controller-0] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact is_atomic] **************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7
Monday 25 June 2018 06:04:52 -0400 (0:00:00.540) 0:00:13.867 *********** 
ok: [controller-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11
Monday 25 June 2018 06:04:52 -0400 (0:00:00.076) 0:00:13.944 *********** 
ok: [controller-0] => {"ansible_facts": {"monitor_name": "controller-0"}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17
Monday 25 June 2018 06:04:52 -0400 (0:00:00.083) 0:00:14.027 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23
Monday 25 June 2018 06:04:52 -0400 (0:00:00.068) 0:00:14.096 *********** 
ok: [controller-0 -> 192.168.24.14] => {"ansible_facts": {"docker_exec_cmd": "docker exec ceph-mon-controller-0"}, "changed": false}

TASK [ceph-defaults : is ceph running already?] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34
Monday 25 June 2018 06:04:52 -0400 (0:00:00.145) 0:00:14.242 *********** 
ok: [controller-0 -> 192.168.24.14] => {"changed": false, "cmd": ["timeout", "5", "docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "fsid"], "delta": "0:00:00.037324", "end": "2018-06-25 10:04:53.520287", "failed_when_result": false, "msg": "non-zero return code", "rc": 1, "start": "2018-06-25 10:04:53.482963", "stderr": "Error response from daemon: No such container: ceph-mon-controller-0", "stderr_lines": ["Error response from daemon: No such container: ceph-mon-controller-0"], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check if /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir directory exists] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47
Monday 25 June 2018 06:04:53 -0400 (0:00:00.579) 0:00:14.821 *********** 
ok: [controller-0 -> localhost] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57
Monday 25 June 2018 06:04:53 -0400 (0:00:00.194) 0:00:15.016 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : create a local fetch directory if it does not exist] *****
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64
Monday 25 June 2018 06:04:53 -0400 (0:00:00.050) 0:00:15.067 *********** 
ok: [controller-0 -> localhost] => {"changed": false, "gid": 985, "group": "mistral", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988}

TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74
Monday 25 June 2018 06:04:54 -0400 (0:00:00.421) 0:00:15.488 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81
Monday 25 June 2018 06:04:54 -0400 (0:00:00.053) 0:00:15.541 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_release": "dummy"}, "changed": false}

TASK [ceph-defaults : generate cluster fsid] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85
Monday 25 June 2018 06:04:54 -0400 (0:00:00.080) 0:00:15.622 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96
Monday 25 June 2018 06:04:54 -0400 (0:00:00.054) 0:00:15.677 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : read cluster fsid if it already exists] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105
Monday 25 June 2018 06:04:54 -0400 (0:00:00.053) 0:00:15.730 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact fsid] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117
Monday 25 June 2018 06:04:54 -0400 (0:00:00.043) 0:00:15.773 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123
Monday 25 June 2018 06:04:54 -0400 (0:00:00.043) 0:00:15.816 *********** 
ok: [controller-0] => {"ansible_facts": {"mds_name": "controller-0"}, "changed": false}

TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129
Monday 25 June 2018 06:04:54 -0400 (0:00:00.190) 0:00:16.007 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135
Monday 25 June 2018 06:04:54 -0400 (0:00:00.129) 0:00:16.137 *********** 
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_owner": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142
Monday 25 June 2018 06:04:54 -0400 (0:00:00.077) 0:00:16.214 *********** 
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_group": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149
Monday 25 June 2018 06:04:54 -0400 (0:00:00.071) 0:00:16.285 *********** 
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_mode": "0770"}, "changed": false}

TASK [ceph-defaults : resolve device link(s)] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156
Monday 25 June 2018 06:04:54 -0400 (0:00:00.069) 0:00:16.354 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166
Monday 25 June 2018 06:04:55 -0400 (0:00:00.047) 0:00:16.402 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact build final devices list] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175
Monday 25 June 2018 06:04:55 -0400 (0:00:00.047) 0:00:16.449 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183
Monday 25 June 2018 06:04:55 -0400 (0:00:00.043) 0:00:16.492 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190
Monday 25 June 2018 06:04:55 -0400 (0:00:00.044) 0:00:16.536 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197
Monday 25 June 2018 06:04:55 -0400 (0:00:00.042) 0:00:16.579 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204
Monday 25 June 2018 06:04:55 -0400 (0:00:00.048) 0:00:16.628 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211
Monday 25 June 2018 06:04:55 -0400 (0:00:00.046) 0:00:16.674 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_uid": 167}, "changed": false}

TASK [ceph-defaults : set_fact ceph_directories] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2
Monday 25 June 2018 06:04:55 -0400 (0:00:00.076) 0:00:16.751 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_directories": ["/etc/ceph", "/var/lib/ceph/", "/var/lib/ceph/mon", "/var/lib/ceph/osd", "/var/lib/ceph/mds", "/var/lib/ceph/tmp", "/var/lib/ceph/radosgw", "/var/lib/ceph/bootstrap-rgw", "/var/lib/ceph/bootstrap-mds", "/var/lib/ceph/bootstrap-osd", "/var/lib/ceph/bootstrap-rbd", "/var/run/ceph"]}, "changed": false}

TASK [ceph-defaults : create ceph initial directories] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18
Monday 25 June 2018 06:04:55 -0400 (0:00:00.065) 0:00:16.816 *********** 
changed: [controller-0] => (item=/etc/ceph) => {"changed": true, "gid": 167, "group": "167", "item": "/etc/ceph", "mode": "0755", "owner": "167", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/mon) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/mon", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/mon", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/osd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/osd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/mds) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/mds", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/tmp) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/tmp", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/tmp", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/radosgw) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/radosgw", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/radosgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-rgw", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-rgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-mds", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-osd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-rbd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-rbd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/run/ceph) => {"changed": true, "gid": 167, "group": "167", "item": "/var/run/ceph", "mode": "0755", "owner": "167", "path": "/var/run/ceph", "secontext": "unconfined_u:object_r:var_run_t:s0", "size": 40, "state": "directory", "uid": 167}

TASK [ceph-docker-common : fail if systemd is not present] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2
Monday 25 June 2018 06:05:01 -0400 (0:00:05.684) 0:00:22.500 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2
Monday 25 June 2018 06:05:01 -0400 (0:00:00.045) 0:00:22.546 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11
Monday 25 June 2018 06:05:01 -0400 (0:00:00.055) 0:00:22.601 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : remove ceph udev rules] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2
Monday 25 June 2018 06:05:01 -0400 (0:00:00.043) 0:00:22.644 *********** 
ok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "path": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "state": "absent"}
ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "path": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "state": "absent"}

TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14
Monday 25 June 2018 06:05:02 -0400 (0:00:00.984) 0:00:23.629 *********** 
ok: [controller-0] => {"ansible_facts": {"monitor_name": "controller-0"}, "changed": false}

TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20
Monday 25 June 2018 06:05:02 -0400 (0:00:00.082) 0:00:23.711 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : get docker version] *********************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26
Monday 25 June 2018 06:05:02 -0400 (0:00:00.041) 0:00:23.753 *********** 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "--version"], "delta": "0:00:00.026626", "end": "2018-06-25 10:05:02.989125", "rc": 0, "start": "2018-06-25 10:05:02.962499", "stderr": "", "stderr_lines": [], "stdout": "Docker version 1.13.1, build 94f4240/1.13.1", "stdout_lines": ["Docker version 1.13.1, build 94f4240/1.13.1"]}

TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32
Monday 25 June 2018 06:05:02 -0400 (0:00:00.531) 0:00:24.284 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_docker_version": "1.13.1,"}, "changed": false}

TASK [ceph-docker-common : check if a cluster is already running] **************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42
Monday 25 June 2018 06:05:02 -0400 (0:00:00.070) 0:00:24.355 *********** 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mon-controller-0"], "delta": "0:00:00.029066", "end": "2018-06-25 10:05:03.620571", "failed_when_result": false, "rc": 0, "start": "2018-06-25 10:05:03.591505", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-docker-common : set_fact ceph_config_keys] **************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2
Monday 25 June 2018 06:05:03 -0400 (0:00:00.561) 0:00:24.916 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring"]}, "changed": false}

TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13
Monday 25 June 2018 06:05:03 -0400 (0:00:00.088) 0:00:25.005 *********** 
ok: [controller-0] => (item=controller-0) => {"ansible_facts": {"tmp_ceph_mgr_keys": "/etc/ceph/ceph.mgr.controller-0.keyring"}, "changed": false, "item": "controller-0"}

TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20
Monday 25 June 2018 06:05:03 -0400 (0:00:00.123) 0:00:25.128 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_mgr_keys": ["/etc/ceph/ceph.mgr.controller-0.keyring"]}, "changed": false}

TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25
Monday 25 June 2018 06:05:03 -0400 (0:00:00.087) 0:00:25.216 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "/etc/ceph/ceph.mgr.controller-0.keyring"]}, "changed": false}

TASK [ceph-docker-common : stat for ceph config and keys] **********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30
Monday 25 June 2018 06:05:03 -0400 (0:00:00.091) 0:00:25.308 *********** 
ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/monmap-ceph", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.mgr.controller-0.keyring", "stat": {"exists": false}}

TASK [ceph-docker-common : fail if we find existing cluster files] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5
Monday 25 June 2018 06:05:05 -0400 (0:00:01.284) 0:00:26.592 *********** 
skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/ceph.client.admin.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring"}}, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/monmap-ceph", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/monmap-ceph"}}, "item": "/etc/ceph/monmap-ceph", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/ceph.mon.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring"}}, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-osd/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, 
\"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was 
False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': 
u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': 
True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nMonday 25 June 2018 06:05:05 -0400 (0:00:00.245) 0:00:26.838 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nMonday 25 June 2018 06:05:05 -0400 (0:00:00.039) 0:00:26.878 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nMonday 25 June 2018 06:05:05 -0400 (0:00:00.040) 0:00:26.919 *********** \nskipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nMonday 25 June 2018 06:05:05 -0400 (0:00:00.046) 0:00:26.965 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nMonday 25 June 2018 06:05:05 -0400 (0:00:00.045) 0:00:27.010 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nMonday 25 June 2018 06:05:05 -0400 (0:00:00.049) 0:00:27.060 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nMonday 25 June 2018 06:05:05 -0400 (0:00:00.040) 0:00:27.101 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nMonday 25 June 2018 06:05:05 -0400 (0:00:00.040) 0:00:27.142 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nMonday 25 
June 2018 06:05:05 -0400 (0:00:00.043) 0:00:27.185 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nMonday 25 June 2018 06:05:05 -0400 (0:00:00.047) 0:00:27.233 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nMonday 25 June 2018 06:05:05 -0400 (0:00:00.042) 0:00:27.275 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nMonday 25 June 2018 06:05:05 -0400 (0:00:00.041) 0:00:27.316 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nMonday 25 June 2018 06:05:05 -0400 (0:00:00.042) 0:00:27.359 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.053) 0:00:27.412 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.044) 0:00:27.456 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.041) 0:00:27.498 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.053) 0:00:27.551 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.043) 0:00:27.594 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.045) 0:00:27.640 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.040) 0:00:27.680 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK 
[ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.043) 0:00:27.724 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.044) 0:00:27.769 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.041) 0:00:27.810 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.045) 0:00:27.855 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.041) 0:00:27.897 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.042) 0:00:27.939 *********** \nskipping: 
[controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.045) 0:00:27.984 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.127) 0:00:28.112 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.045) 0:00:28.157 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nMonday 25 June 2018 06:05:06 -0400 (0:00:00.047) 0:00:28.204 *********** \nok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:16.834319\", \"end\": \"2018-06-25 10:05:24.277992\", \"rc\": 0, \"start\": \"2018-06-25 10:05:07.443673\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\\n3-6: Pulling from 192.168.24.1:8787/rhceph\\n9a32f102e677: Pulling fs layer\\nb8aa42cec17a: Pulling fs layer\\nf00cbf28d025: Pulling fs layer\\nb8aa42cec17a: Verifying Checksum\\nb8aa42cec17a: Download complete\\n9a32f102e677: Verifying Checksum\\n9a32f102e677: Download complete\\nf00cbf28d025: Verifying Checksum\\nf00cbf28d025: Download complete\\n9a32f102e677: Pull complete\\nb8aa42cec17a: Pull complete\\nf00cbf28d025: Pull complete\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"9a32f102e677: Pulling fs layer\", \"b8aa42cec17a: Pulling fs layer\", \"f00cbf28d025: Pulling fs layer\", \"b8aa42cec17a: Verifying Checksum\", \"b8aa42cec17a: Download complete\", \"9a32f102e677: Verifying Checksum\", \"9a32f102e677: Download complete\", \"f00cbf28d025: Verifying Checksum\", \"f00cbf28d025: Download complete\", \"9a32f102e677: Pull complete\", \"b8aa42cec17a: Pull complete\", \"f00cbf28d025: Pull complete\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nMonday 25 June 2018 06:05:24 -0400 (0:00:17.379) 0:00:45.583 *********** \nchanged: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.036651\", \"end\": \"2018-06-25 10:05:24.877314\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:05:24.840663\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n 
\\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": 
\\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": 
{},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": 
\\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/2e4510fb398c1ae72535c5c3f1f0f1546729fe945cd85f87dd450c522e8905ab/diff:/var/lib/docker/overlay2/ba0a06d1080745666a14fd468c920651d33a74f62e3c7d02ed110dfc641fac15/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": 
\\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" 
\\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" 
\\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": 
\\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/2e4510fb398c1ae72535c5c3f1f0f1546729fe945cd85f87dd450c522e8905ab/diff:/var/lib/docker/overlay2/ba0a06d1080745666a14fd468c920651d33a74f62e3c7d02ed110dfc641fac15/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/merged\\\",\", \" \\\"UpperDir\\\": 
\\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nMonday 25 June 2018 06:05:24 -0400 (0:00:00.609) 0:00:46.193 *********** \nok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nMonday 25 June 2018 06:05:24 -0400 (0:00:00.069) 0:00:46.263 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nMonday 25 June 2018 06:05:24 -0400 (0:00:00.048) 0:00:46.311 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nMonday 25 June 2018 06:05:24 -0400 (0:00:00.045) 0:00:46.357 *********** 
\nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nMonday 25 June 2018 06:05:25 -0400 (0:00:00.044) 0:00:46.401 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nMonday 25 June 2018 06:05:25 -0400 (0:00:00.044) 0:00:46.446 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nMonday 25 June 2018 06:05:25 -0400 (0:00:00.047) 0:00:46.493 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nMonday 25 June 2018 06:05:25 -0400 (0:00:00.044) 0:00:46.537 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nMonday 25 June 2018 06:05:25 -0400 (0:00:00.046) 0:00:46.584 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nMonday 25 June 2018 06:05:25 -0400 (0:00:00.047) 0:00:46.632 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nMonday 25 June 2018 06:05:25 -0400 (0:00:00.050) 0:00:46.682 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nMonday 25 June 2018 06:05:25 -0400 (0:00:00.044) 0:00:46.727 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nMonday 25 June 2018 06:05:25 -0400 (0:00:00.044) 0:00:46.772 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.617837\", \"end\": \"2018-06-25 10:05:26.655884\", \"rc\": 0, \"start\": \"2018-06-25 10:05:26.038047\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nMonday 25 June 2018 06:05:26 -0400 (0:00:01.180) 0:00:47.952 *********** \nok: [controller-0] 
=> {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nMonday 25 June 2018 06:05:26 -0400 (0:00:00.069) 0:00:48.022 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nMonday 25 June 2018 06:05:26 -0400 (0:00:00.044) 0:00:48.067 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nMonday 25 June 2018 06:05:26 -0400 (0:00:00.046) 0:00:48.114 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nMonday 25 June 2018 06:05:26 -0400 (0:00:00.072) 0:00:48.186 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release nautilus] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26\nMonday 25 June 2018 06:05:26 -0400 (0:00:00.051) 0:00:48.238 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nMonday 25 June 2018 06:05:26 -0400 
(0:00:00.043) 0:00:48.282 *********** \nchanged: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : create ceph conf directory] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nMonday 25 June 2018 06:05:29 -0400 (0:00:02.454) 
0:00:50.736 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate ceph configuration file: ceph.conf] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12\nMonday 25 June 2018 06:05:29 -0400 (0:00:00.049) 0:00:50.785 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : create a local fetch directory if it does not exist] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38\nMonday 25 June 2018 06:05:29 -0400 (0:00:00.047) 0:00:50.833 *********** \nok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 988}\n\nTASK [ceph-config : generate cluster uuid] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54\nMonday 25 June 2018 06:05:29 -0400 (0:00:00.197) 0:00:51.031 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : read cluster uuid if it already exists] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64\nMonday 25 June 2018 06:05:29 -0400 (0:00:00.046) 0:00:51.077 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : ensure /etc/ceph exists] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76\nMonday 25 June 2018 06:05:29 -0400 (0:00:00.041) 0:00:51.119 *********** \nchanged: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": 
\"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-config : generate ceph.conf configuration file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84\nMonday 25 June 2018 06:05:30 -0400 (0:00:00.537) 0:00:51.657 *********** \nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mon restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy osd restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mds restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy rgw restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for controller-0\nNOTIFIED 
HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for controller-0\nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"677880bddaa262c511eb635c230f19e6a4ddfabe\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"13cb0c834a94e4916365ae02ba1fbe9e\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 761, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921130.31-23484003935514/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-config : set fsid fact when generate_fsid = true] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102\nMonday 25 June 2018 06:05:33 -0400 (0:00:03.413) 0:00:55.070 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact docker_exec_cmd] *************************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:2\nMonday 25 June 2018 06:05:33 -0400 (0:00:00.044) 0:00:55.115 *********** \nok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-mon : make sure monitor_interface or monitor_address or monitor_address_block is configured] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml:2\nMonday 25 June 2018 06:05:33 -0400 (0:00:00.070) 0:00:55.186 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : generate monitor initial keyring] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:2\nMonday 25 June 2018 06:05:33 -0400 (0:00:00.052) 0:00:55.238 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : read monitor initial keyring if it already exists] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:11\nMonday 25 June 2018 06:05:33 -0400 (0:00:00.048) 0:00:55.287 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : create monitor initial keyring] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:22\nMonday 25 June 2018 06:05:33 -0400 (0:00:00.047) 0:00:55.334 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set initial monitor key permissions] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:34\nMonday 25 June 2018 06:05:33 -0400 (0:00:00.048) 0:00:55.383 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK 
[ceph-mon : create (and fix ownership of) monitor directory] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:42\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.052) 0:00:55.436 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact client_admin_ceph_authtool_cap >= ceph_release_num.luminous] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:51\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.046) 0:00:55.482 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact client_admin_ceph_authtool_cap < ceph_release_num.luminous] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:63\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.045) 0:00:55.528 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : create custom admin keyring] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:74\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.043) 0:00:55.571 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set ownership of admin keyring] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:88\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.041) 0:00:55.613 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : import admin keyring into mon keyring] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:99\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.047) 0:00:55.661 *********** \nskipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ceph monitor mkfs with keyring] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:106\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.042) 0:00:55.703 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ceph monitor mkfs without keyring] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:113\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.042) 0:00:55.745 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ensure systemd service override directory exists] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:2\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.041) 0:00:55.787 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : add ceph-mon systemd service overrides] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:10\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.054) 0:00:55.841 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : start the monitor service] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:20\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.050) 0:00:55.891 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : enable the ceph-mon.target service] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:29\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.052) 0:00:55.944 *********** \nskipping: 
[controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : include ceph_keys.yml] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:19\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.045) 0:00:55.990 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : collect all the pools] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:2\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.041) 0:00:56.032 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : secure the cluster] *******************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:7\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.038) 0:00:56.070 *********** \n\nTASK [ceph-mon : set_fact ceph_config_keys] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:2\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.044) 0:00:56.115 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\"]}, \"changed\": false}\n\nTASK [ceph-mon : register rbd bootstrap key] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:11\nMonday 25 June 2018 06:05:34 -0400 (0:00:00.180) 0:00:56.295 *********** \nok: [controller-0] => {\"ansible_facts\": {\"bootstrap_rbd_keyring\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}\n\nTASK [ceph-mon : merge rbd bootstrap key to config and keys paths] 
*************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:17\nMonday 25 June 2018 06:05:35 -0400 (0:00:00.169) 0:00:56.465 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}\n\nTASK [ceph-mon : stat for ceph config and keys] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:22\nMonday 25 June 2018 06:05:35 -0400 (0:00:00.082) 0:00:56.548 *********** \nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, 
\"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}\n\nTASK [ceph-mon : try to copy ceph keys] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:33\nMonday 25 June 2018 06:05:36 -0400 (0:00:00.872) 0:00:57.421 *********** \nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => 
(item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': 
False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, 
\"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": 
false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : populate kv_store with default ceph.conf] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:2\nMonday 25 June 2018 06:05:36 -0400 (0:00:00.124) 0:00:57.546 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : populate kv_store with custom ceph.conf] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:18\nMonday 25 June 2018 06:05:36 -0400 (0:00:00.047) 0:00:57.594 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : delete populate-kv-store docker] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:36\nMonday 25 June 2018 06:05:36 -0400 (0:00:00.047) 0:00:57.641 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : generate systemd unit file] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:43\nMonday 25 June 2018 06:05:36 -0400 (0:00:00.038) 0:00:57.680 *********** \nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"8e2b129de045a7f1e572d1bfdd590c11edd51013\", \"dest\": \"/etc/systemd/system/ceph-mon@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"be7a3a8e4b79b94c82d572bbdfe17fb0\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 835, \"src\": 
\"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921136.42-87207207447687/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-mon : systemd start mon container] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:54\nMonday 25 June 2018 06:05:39 -0400 (0:00:02.964) 0:01:00.644 *********** \nok: [controller-0] => {\"changed\": false, \"enabled\": true, \"name\": \"ceph-mon@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dmon.slice basic.target docker.service systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Monitor\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --name ceph-mon-%i --net=host --memory=1g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v 
/var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro --net=host -e IP_VERSION=4 -e MON_IP=172.17.3.10 -e CLUSTER=ceph -e FSID=78ace352-763a-11e8-9c1d-525400166144 -e CEPH_PUBLIC_NETWORK=172.17.3.0/24 -e CEPH_DAEMON=MON 192.168.24.1:8787/rhceph:3-6 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mon@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mon@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127793\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127793\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": 
\"ceph-mon@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmon.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmon.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\n\nTASK [ceph-mon : configure ceph profile.d aliases] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml:2\nMonday 25 
June 2018 06:05:40 -0400 (0:00:00.998) 0:01:01.642 *********** \nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"78965c7dfcde4827c1cb8645bc7a444472e87718\", \"dest\": \"/etc/profile.d/ceph-aliases.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"66a9bfe5c26a22ade3c67cc7c7a58d2c\", \"mode\": \"0755\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:bin_t:s0\", \"size\": 375, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921140.4-40407098418836/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-mon : wait for monitor socket to exist] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:12\nMonday 25 June 2018 06:05:43 -0400 (0:00:02.860) 0:01:04.503 *********** \nchanged: [controller-0] => {\"attempts\": 1, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"sh\", \"-c\", \"stat /var/run/ceph/ceph-mon.controller-0.asok || stat /var/run/ceph/ceph-mon.controller-0.localdomain.asok\"], \"delta\": \"0:00:00.106485\", \"end\": \"2018-06-25 10:05:43.830371\", \"rc\": 0, \"start\": \"2018-06-25 10:05:43.723886\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: '/var/run/ceph/ceph-mon.controller-0.asok'\\n Size: 0 \\tBlocks: 0 IO Block: 4096 socket\\nDevice: 14h/20d\\tInode: 376176 Links: 1\\nAccess: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\\nAccess: 2018-06-25 10:05:41.570204432 +0000\\nModify: 2018-06-25 10:05:41.570204432 +0000\\nChange: 2018-06-25 10:05:41.570204432 +0000\\n Birth: -\", \"stdout_lines\": [\" File: '/var/run/ceph/ceph-mon.controller-0.asok'\", \" Size: 0 \\tBlocks: 0 IO Block: 4096 socket\", \"Device: 14h/20d\\tInode: 376176 Links: 1\", \"Access: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\", \"Access: 2018-06-25 10:05:41.570204432 +0000\", \"Modify: 2018-06-25 10:05:41.570204432 +0000\", \"Change: 2018-06-25 10:05:41.570204432 +0000\", \" Birth: -\"]}\n\nTASK [ceph-mon : ipv4 - force peer 
addition as potential bootstrap peer for cluster bringup - monitor_interface] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:19\nMonday 25 June 2018 06:05:43 -0400 (0:00:00.634) 0:01:05.137 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:29\nMonday 25 June 2018 06:05:43 -0400 (0:00:00.086) 0:01:05.223 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:39\nMonday 25 June 2018 06:05:43 -0400 (0:00:00.092) 0:01:05.316 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--admin-daemon\", \"/var/run/ceph/ceph-mon.controller-0.asok\", \"add_bootstrap_peer_hint\", \"172.17.3.10\"], \"delta\": \"0:00:00.209401\", \"end\": \"2018-06-25 10:05:44.855253\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:05:44.645852\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"mon already active; ignoring bootstrap hint\", \"stdout_lines\": [\"mon already active; ignoring bootstrap hint\"]}\n\nTASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:49\nMonday 25 June 2018 06:05:44 -0400 (0:00:00.837) 0:01:06.153 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup 
- monitor_address] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:59\nMonday 25 June 2018 06:05:44 -0400 (0:00:00.047) 0:01:06.201 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:69\nMonday 25 June 2018 06:05:44 -0400 (0:00:00.044) 0:01:06.246 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : push ceph files to the ansible server] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2\nMonday 25 June 2018 06:05:44 -0400 (0:00:00.049) 0:01:06.295 *********** \nchanged: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"87f8e20ff9c54bcb76bf97228cb0ba705b439784\", \"dest\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/etc/ceph/ceph.client.admin.keyring\", \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": 
{\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"fe671f5606d3379d4ccf3ddb240723ce\", \"remote_checksum\": \"87f8e20ff9c54bcb76bf97228cb0ba705b439784\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"ae4c70255ca42eb77eacd1cf1db0492ada8c18ae\", \"dest\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/etc/ceph/ceph.mon.keyring\", \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, 
\"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"ec178c2da843c050d2e2ac237ae701a5\", \"remote_checksum\": \"ae4c70255ca42eb77eacd1cf1db0492ada8c18ae\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"502b9fd25b9d73522bc5c0029ec362bd3ef148be\", \"dest\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, 
\"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"2f594fd27d9a2938d207fd0e4dcd1fdb\", \"remote_checksum\": \"502b9fd25b9d73522bc5c0029ec362bd3ef148be\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"381a02ebfa1216a2a279ae665eeaebd1ce6de5f5\", \"dest\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, 
\"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"de60d3b20fec15a075e9f0d39a69d366\", \"remote_checksum\": \"381a02ebfa1216a2a279ae665eeaebd1ce6de5f5\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"3540de06c3ed498809bdddd6a350cae592455923\", \"dest\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": 
false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"dc685a0d2a335b9e52bb10344037e6ac\", \"remote_checksum\": \"3540de06c3ed498809bdddd6a350cae592455923\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"c3545cb2f74ad0b3c3491481b9215a04221dc20f\", \"dest\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": 
false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"e0dfc3e5a328b94796559ffa871a90e6\", \"remote_checksum\": \"c3545cb2f74ad0b3c3491481b9215a04221dc20f\", \"remote_md5sum\": null}\n\nTASK [ceph-mon : create ceph rest api keyring when mon is containerized] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:84\nMonday 25 June 2018 06:05:47 -0400 (0:00:03.032) 0:01:09.328 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : create ceph mgr keyring(s) when mon is containerized] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:97\nMonday 25 June 2018 06:05:47 -0400 (0:00:00.044) 0:01:09.372 *********** \nok: [controller-0] => (item=controller-0) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"get-or-create\", \"mgr.controller-0\", \"mon\", \"allow profile mgr\", \"osd\", \"allow *\", \"mds\", \"allow *\", \"-o\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"], \"delta\": \"0:00:00.385673\", \"end\": \"2018-06-25 10:05:49.085002\", \"item\": \"controller-0\", \"rc\": 0, \"start\": \"2018-06-25 10:05:48.699329\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-mon : stat for ceph mgr key(s)] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:109\nMonday 25 June 2018 06:05:48 -0400 (0:00:01.009) 0:01:10.381 *********** \nok: [controller-0] => 
(item=controller-0) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"controller-0\", \"stat\": {\"atime\": 1529921148.958226, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"us-ascii\", \"checksum\": \"dce8b853b5430d214621f9e0ba7d2feebbb2a1a5\", \"ctime\": 1529921149.0672264, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 7654767, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1529921149.0672264, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"18446744072441155520\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\n\nTASK [ceph-mon : fetch ceph mgr key(s)] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:121\nMonday 25 June 2018 06:05:49 -0400 (0:00:00.602) 0:01:10.984 *********** \nchanged: [controller-0] => (item={'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 0, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529921149.0672264, u'block_size': 4096, u'inode': 7654767, u'isgid': False, u'size': 67, u'executable': False, u'roth': True, u'charset': u'us-ascii', u'readable': True, u'version': u'18446744072441155520', u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'root', u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 
1529921148.958226, u'mimetype': u'text/plain', u'ctime': 1529921149.0672264, u'isblk': False, u'xgrp': False, u'dev': 64514, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'checksum': u'dce8b853b5430d214621f9e0ba7d2feebbb2a1a5', u'islnk': False, u'attributes': []}, u'changed': False, '_ansible_no_log': False, 'item': u'controller-0', '_ansible_item_result': True, 'failed': False, u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None}) => {\"changed\": true, \"checksum\": \"dce8b853b5430d214621f9e0ba7d2feebbb2a1a5\", \"dest\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/etc/ceph/ceph.mgr.controller-0.keyring\", \"item\": {\"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"controller-0\", \"stat\": {\"atime\": 1529921148.958226, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"us-ascii\", \"checksum\": \"dce8b853b5430d214621f9e0ba7d2feebbb2a1a5\", \"ctime\": 1529921149.0672264, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 7654767, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1529921149.0672264, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, 
\"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"18446744072441155520\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}, \"md5sum\": \"46173b1f477ccec40e6961621fd8c750\", \"remote_checksum\": \"dce8b853b5430d214621f9e0ba7d2feebbb2a1a5\", \"remote_md5sum\": null}\n\nTASK [ceph-mon : configure crush hierarchy] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:2\nMonday 25 June 2018 06:05:50 -0400 (0:00:00.572) 0:01:11.557 *********** \nskipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : create configured crush rules] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:14\nMonday 25 June 2018 06:05:50 -0400 (0:00:00.044) 0:01:11.601 *********** \nskipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : get id for new default crush rule] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:21\nMonday 25 June 2018 06:05:50 -0400 (0:00:00.056) 0:01:11.657 *********** \nskipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, 
\"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact info_ceph_default_crush_rule_yaml] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:33\nMonday 25 June 2018 06:05:50 -0400 (0:00:00.055) 0:01:11.713 *********** \nskipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'HDD', u'name': u'HDD'}, 'changed': False, '_ansible_ignore_errors': None}) => {\"changed\": false, \"item\": {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\", \"skipped\": true}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'SSD', u'name': u'SSD'}, 'changed': False, '_ansible_ignore_errors': None}) => {\"changed\": false, \"item\": {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\", \"skipped\": true}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact osd_pool_default_crush_rule to osd_pool_default_crush_replicated_ruleset if release < luminous else osd_pool_default_crush_rule] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:41\nMonday 25 June 2018 06:05:50 -0400 (0:00:00.055) 0:01:11.769 *********** \nskipping: 
[controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : insert new default crush rule into daemon to prevent restart] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:45\nMonday 25 June 2018 06:05:50 -0400 (0:00:00.075) 0:01:11.844 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : add new default crush rule to ceph.conf] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:54\nMonday 25 June 2018 06:05:50 -0400 (0:00:00.077) 0:01:11.921 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : get default value for osd_pool_default_pg_num] ****************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:5\nMonday 25 June 2018 06:05:50 -0400 (0:00:00.048) 0:01:11.970 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact osd_pool_default_pg_num with pool_default_pg_num (backward compatibility)] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:16\nMonday 25 June 2018 06:05:50 -0400 (0:00:00.053) 0:01:12.024 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact osd_pool_default_pg_num with default_pool_default_pg_num.stdout] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:21\nMonday 25 June 2018 06:05:50 -0400 (0:00:00.042) 0:01:12.067 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact 
osd_pool_default_pg_num ceph_conf_overrides.global.osd_pool_default_pg_num] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:27\nMonday 25 June 2018 06:05:50 -0400 (0:00:00.041) 0:01:12.108 *********** \nok: [controller-0] => {\"ansible_facts\": {\"osd_pool_default_pg_num\": \"32\"}, \"changed\": false}\n\nTASK [ceph-mon : increase calamari logging level when debug is on] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:9\nMonday 25 June 2018 06:05:50 -0400 (0:00:00.069) 0:01:12.177 *********** \nskipping: [controller-0] => (item=cthulhu) => {\"changed\": false, \"item\": \"cthulhu\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=calamari_web) => {\"changed\": false, \"item\": \"calamari_web\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : initialize the calamari server api] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:20\nMonday 25 June 2018 06:05:50 -0400 (0:00:00.048) 0:01:12.225 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******\nMonday 25 June 2018 06:05:50 -0400 (0:00:00.015) 0:01:12.241 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mon restart script] **********************\nMonday 25 June 2018 06:05:50 -0400 (0:00:00.063) 0:01:12.304 *********** \nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"a16eea5d614de2b10079cb91a04686e919ccc201\", \"dest\": \"/tmp/restart_mon_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"b59e1abae52d61eb05b9ff080771a551\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_home_t:s0\", \"size\": 1173, \"src\": 
\"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921150.98-47213257884394/source\", \"state\": \"file\", \"uid\": 0}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***\nMonday 25 June 2018 06:05:53 -0400 (0:00:02.730) 0:01:15.034 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******\nMonday 25 June 2018 06:05:53 -0400 (0:00:00.082) 0:01:15.117 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********\nMonday 25 June 2018 06:05:53 -0400 (0:00:00.122) 0:01:15.239 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******\nMonday 25 June 2018 06:05:54 -0400 (0:00:00.166) 0:01:15.406 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy osd restart script] **********************\nMonday 25 June 2018 06:05:54 -0400 (0:00:00.172) 0:01:15.579 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***\nMonday 25 June 2018 06:05:54 -0400 (0:00:00.045) 0:01:15.625 *********** \nskipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******\nMonday 25 June 2018 06:05:54 -0400 (0:00:00.074) 0:01:15.699 *********** \nskipping: [controller-0] => 
(item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********\nMonday 25 June 2018 06:05:54 -0400 (0:00:00.073) 0:01:15.772 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******\nMonday 25 June 2018 06:05:54 -0400 (0:00:00.160) 0:01:15.933 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mds restart script] **********************\nMonday 25 June 2018 06:05:54 -0400 (0:00:00.164) 0:01:16.097 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***\nMonday 25 June 2018 06:05:54 -0400 (0:00:00.043) 0:01:16.140 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******\nMonday 25 June 2018 06:05:54 -0400 (0:00:00.053) 0:01:16.194 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********\nMonday 25 June 2018 06:05:54 -0400 (0:00:00.053) 0:01:16.247 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******\nMonday 25 June 2018 06:05:55 -0400 (0:00:00.158) 0:01:16.406 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rgw restart script] 
**********************\nMonday 25 June 2018 06:05:55 -0400 (0:00:00.154) 0:01:16.560 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***\nMonday 25 June 2018 06:05:55 -0400 (0:00:00.044) 0:01:16.605 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******\nMonday 25 June 2018 06:05:55 -0400 (0:00:00.053) 0:01:16.659 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********\nMonday 25 June 2018 06:05:55 -0400 (0:00:00.049) 0:01:16.708 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***\nMonday 25 June 2018 06:05:55 -0400 (0:00:00.151) 0:01:16.860 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************\nMonday 25 June 2018 06:05:55 -0400 (0:00:00.132) 0:01:16.993 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***\nMonday 25 June 2018 06:05:55 -0400 (0:00:00.043) 0:01:17.036 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***\nMonday 25 June 2018 06:05:55 -0400 (0:00:00.050) 0:01:17.086 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***\nMonday 25 June 2018 06:05:55 -0400 (0:00:00.050) 0:01:17.137 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******\nMonday 25 June 2018 06:05:55 -0400 (0:00:00.060) 0:01:17.197 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************\nMonday 25 June 2018 06:05:55 -0400 (0:00:00.065) 0:01:17.262 *********** \nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"f36b3460f6762a853a3dab1958afb7d83ff8f234\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"9d50588dc55f43284b00033b8b30edc3\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_home_t:s0\", \"size\": 570, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921155.93-172993666328884/source\", \"state\": \"file\", \"uid\": 0}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***\nMonday 25 June 2018 06:05:58 -0400 (0:00:02.645) 0:01:19.908 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******\nMonday 25 June 2018 06:05:58 -0400 (0:00:00.082) 0:01:19.990 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********\nMonday 25 June 2018 06:05:58 -0400 (0:00:00.121) 0:01:20.112 *********** \nok: [controller-0] => 
{\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}\nMETA: ran handlers\nMETA: ran handlers\n\nPLAY [mons] ********************************************************************\nMETA: ran handlers\n\nTASK [set ceph monitor install 'Complete'] *************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:98\nMonday 25 June 2018 06:05:58 -0400 (0:00:00.097) 0:01:20.210 *********** \nok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"end\": \"20180625060558Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\nMETA: ran handlers\n\nPLAY [mgrs] ********************************************************************\n\nTASK [set ceph manager install 'In Progress'] **********************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:110\nMonday 25 June 2018 06:05:58 -0400 (0:00:00.138) 0:01:20.348 *********** \nok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"start\": \"20180625060558Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [ceph-defaults : check for a mon container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2\nMonday 25 June 2018 06:05:59 -0400 (0:00:00.077) 0:01:20.426 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.030411\", \"end\": \"2018-06-25 10:05:59.682239\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:05:59.651828\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"cb29ae4ab48a\", \"stdout_lines\": [\"cb29ae4ab48a\"]}\n\nTASK [ceph-defaults : check for an osd container] ******************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11\nMonday 25 June 2018 06:05:59 -0400 (0:00:00.554) 0:01:20.980 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mds container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20\nMonday 25 June 2018 06:05:59 -0400 (0:00:00.045) 0:01:21.026 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rgw container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29\nMonday 25 June 2018 06:05:59 -0400 (0:00:00.046) 0:01:21.073 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mgr container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38\nMonday 25 June 2018 06:05:59 -0400 (0:00:00.043) 0:01:21.116 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mgr-controller-0\"], \"delta\": \"0:00:00.033465\", \"end\": \"2018-06-25 10:06:00.369884\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:06:00.336419\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-defaults : check for a rbd mirror container] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.551) 0:01:21.668 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a nfs container] 
*******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.045) 0:01:21.714 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mon socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.043) 0:01:21.757 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mon socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.048) 0:01:21.805 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.041) 0:01:21.846 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph osd socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.042) 0:01:21.889 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph osd socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.040) 0:01:21.929 *********** \nskipping: 
[controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.041) 0:01:21.971 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mds socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.043) 0:01:22.015 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mds socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.043) 0:01:22.059 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.043) 0:01:22.103 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rgw socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.042) 0:01:22.145 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.040) 0:01:22.186 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.042) 0:01:22.228 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mgr socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.050) 0:01:22.278 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.046) 0:01:22.325 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137\nMonday 25 June 2018 06:06:00 -0400 (0:00:00.043) 0:01:22.368 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rbd mirror socket] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146\nMonday 25 June 2018 06:06:01 -0400 (0:00:00.041) 0:01:22.409 *********** \nskipping: [controller-0] => {\"changed\": 
false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156\nMonday 25 June 2018 06:06:01 -0400 (0:00:00.041) 0:01:22.451 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166\nMonday 25 June 2018 06:06:01 -0400 (0:00:00.045) 0:01:22.497 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175\nMonday 25 June 2018 06:06:01 -0400 (0:00:00.047) 0:01:22.545 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184\nMonday 25 June 2018 06:06:01 -0400 (0:00:00.044) 0:01:22.589 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194\nMonday 25 June 2018 06:06:01 -0400 (0:00:00.044) 0:01:22.633 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if it is atomic host] ******************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2\nMonday 25 June 2018 06:06:01 -0400 (0:00:00.042) 0:01:22.676 *********** \nok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact is_atomic] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7\nMonday 25 June 2018 06:06:01 -0400 (0:00:00.515) 0:01:23.192 *********** \nok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11\nMonday 25 June 2018 06:06:01 -0400 (0:00:00.167) 0:01:23.360 *********** \nok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17\nMonday 25 June 2018 06:06:02 -0400 (0:00:00.071) 0:01:23.431 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact docker_exec_cmd] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23\nMonday 25 June 2018 06:06:02 -0400 (0:00:00.065) 0:01:23.497 *********** \nok: [controller-0 -> 192.168.24.14] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : is ceph running already?] 
********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34\nMonday 25 June 2018 06:06:02 -0400 (0:00:00.259) 0:01:23.756 *********** \nok: [controller-0 -> 192.168.24.14] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.363071\", \"end\": \"2018-06-25 10:06:03.448818\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:06:03.085747\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"78ace352-763a-11e8-9c1d-525400166144\", \"stdout_lines\": [\"78ace352-763a-11e8-9c1d-525400166144\"]}\n\nTASK [ceph-defaults : check if /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir directory exists] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47\nMonday 25 June 2018 06:06:03 -0400 (0:00:00.997) 0:01:24.753 *********** \nok: [controller-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57\nMonday 25 June 2018 06:06:03 -0400 (0:00:00.321) 0:01:25.075 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : create a local fetch directory if it does not exist] *****\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64\nMonday 25 June 2018 06:06:03 -0400 (0:00:00.048) 0:01:25.124 *********** \nok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 50, \"state\": \"directory\", \"uid\": 988}\n\nTASK [ceph-defaults : set_fact fsid 
ceph_current_fsid.stdout] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74\nMonday 25 June 2018 06:06:03 -0400 (0:00:00.192) 0:01:25.316 *********** \nok: [controller-0] => {\"ansible_facts\": {\"fsid\": \"78ace352-763a-11e8-9c1d-525400166144\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81\nMonday 25 June 2018 06:06:03 -0400 (0:00:00.072) 0:01:25.389 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}\n\nTASK [ceph-defaults : generate cluster fsid] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.078) 0:01:25.468 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.046) 0:01:25.514 *********** \nchanged: [controller-0 -> localhost] => {\"changed\": true, \"cmd\": \"echo 78ace352-763a-11e8-9c1d-525400166144 | tee /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"delta\": \"0:00:00.004942\", \"end\": \"2018-06-25 06:06:04.273707\", \"rc\": 0, \"start\": \"2018-06-25 06:06:04.268765\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"78ace352-763a-11e8-9c1d-525400166144\", \"stdout_lines\": [\"78ace352-763a-11e8-9c1d-525400166144\"]}\n\nTASK [ceph-defaults : read cluster fsid if it already exists] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.197) 0:01:25.711 *********** \nskipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact fsid] *******************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.042) 0:01:25.753 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.039) 0:01:25.793 *********** \nok: [controller-0] => {\"ansible_facts\": {\"mds_name\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.082) 0:01:25.875 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.045) 0:01:25.921 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.042) 0:01:25.964 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.045) 0:01:26.009 *********** \nskipping: [controller-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : resolve device link(s)] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.045) 0:01:26.054 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.048) 0:01:26.103 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact build final devices list] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.044) 0:01:26.147 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.045) 0:01:26.192 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.044) 0:01:26.236 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.045) 0:01:26.282 *********** 
\nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.042) 0:01:26.324 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211\nMonday 25 June 2018 06:06:04 -0400 (0:00:00.055) 0:01:26.380 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact ceph_directories] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2\nMonday 25 June 2018 06:06:05 -0400 (0:00:00.072) 0:01:26.453 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}\n\nTASK [ceph-defaults : create ceph initial directories] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18\nMonday 25 June 2018 06:06:05 -0400 (0:00:00.066) 0:01:26.519 *********** \nok: [controller-0] => (item=/etc/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/) => 
{\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 160, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/mon) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/tmp) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 28, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 35, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", 
\"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/run/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 60, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-docker-common : fail if systemd is not present] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2\nMonday 25 June 2018 06:06:10 -0400 (0:00:05.758) 0:01:32.278 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure 
monitor_interface, monitor_address or monitor_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2\nMonday 25 June 2018 06:06:10 -0400 (0:00:00.043) 0:01:32.322 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11\nMonday 25 June 2018 06:06:10 -0400 (0:00:00.050) 0:01:32.372 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove ceph udev rules] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2\nMonday 25 June 2018 06:06:11 -0400 (0:00:00.042) 0:01:32.414 *********** \nok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}\nok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14\nMonday 25 June 2018 06:06:12 -0400 (0:00:00.997) 0:01:33.412 *********** \nok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20\nMonday 25 June 2018 06:06:12 -0400 (0:00:00.067) 0:01:33.479 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get docker version] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26\nMonday 25 June 2018 06:06:12 -0400 (0:00:00.038) 0:01:33.518 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.027786\", \"end\": \"2018-06-25 10:06:12.772727\", \"rc\": 0, \"start\": \"2018-06-25 10:06:12.744941\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 94f4240/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 94f4240/1.13.1\"]}\n\nTASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32\nMonday 25 June 2018 06:06:12 -0400 (0:00:00.552) 0:01:34.071 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}\n\nTASK [ceph-docker-common : check if a cluster is already running] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42\nMonday 25 June 2018 06:06:12 -0400 (0:00:00.078) 0:01:34.149 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.030387\", \"end\": \"2018-06-25 10:06:13.407756\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:06:13.377369\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"cb29ae4ab48a\", \"stdout_lines\": [\"cb29ae4ab48a\"]}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys] **************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2\nMonday 25 June 2018 06:06:13 -0400 (0:00:00.553) 0:01:34.703 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13\nMonday 25 June 2018 06:06:13 -0400 (0:00:00.053) 0:01:34.757 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20\nMonday 25 June 2018 06:06:13 -0400 (0:00:00.059) 0:01:34.816 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25\nMonday 25 June 2018 06:06:13 -0400 (0:00:00.057) 0:01:34.874 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : stat for ceph config and keys] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30\nMonday 25 June 2018 06:06:13 -0400 (0:00:00.060) 0:01:34.934 *********** \nskipping: [controller-0] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional 
result was False\"}\nskipping: [controller-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : fail if we find existing cluster files] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5\nMonday 25 June 2018 06:06:13 -0400 (0:00:00.106) 0:01:35.041 *********** \nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', 'changed': False, 
'_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': 
u'/var/lib/ceph/bootstrap-mds/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nMonday 25 June 2018 06:06:13 -0400 (0:00:00.111) 0:01:35.152 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nMonday 25 June 2018 06:06:13 -0400 (0:00:00.045) 0:01:35.198 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nMonday 25 June 2018 06:06:13 -0400 (0:00:00.048) 0:01:35.246 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nMonday 25 June 2018 06:06:13 -0400 (0:00:00.054) 0:01:35.301 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nMonday 25 June 2018 06:06:13 -0400 (0:00:00.049) 0:01:35.351 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nMonday 25 June 2018 06:06:14 -0400 (0:00:00.052) 0:01:35.403 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nMonday 25 June 2018 06:06:14 -0400 (0:00:00.044) 0:01:35.448 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nMonday 25 June 2018 06:06:14 -0400 (0:00:00.049) 0:01:35.498 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nMonday 25 June 2018 06:06:14 -0400 (0:00:00.049) 0:01:35.547 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"cb29ae4ab48a\"], \"delta\": \"0:00:00.031135\", \"end\": \"2018-06-25 10:06:14.921687\", \"rc\": 0, \"start\": \"2018-06-25 10:06:14.890552\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"cb29ae4ab48a657c8c381eaccfebf96cdebd1b8eedd2ab415185f68dc2b8a034\\\",\\n \\\"Created\\\": \\\"2018-06-25T10:05:40.357427181Z\\\",\\n \\\"Path\\\": \\\"/entrypoint.sh\\\",\\n \\\"Args\\\": [],\\n \\\"State\\\": {\\n \\\"Status\\\": \\\"running\\\",\\n \\\"Running\\\": true,\\n \\\"Paused\\\": false,\\n \\\"Restarting\\\": false,\\n \\\"OOMKilled\\\": false,\\n \\\"Dead\\\": false,\\n \\\"Pid\\\": 26141,\\n \\\"ExitCode\\\": 0,\\n \\\"Error\\\": \\\"\\\",\\n \\\"StartedAt\\\": \\\"2018-06-25T10:05:40.580604569Z\\\",\\n \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\\n },\\n \\\"Image\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/cb29ae4ab48a657c8c381eaccfebf96cdebd1b8eedd2ab415185f68dc2b8a034/resolv.conf\\\",\\n \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/cb29ae4ab48a657c8c381eaccfebf96cdebd1b8eedd2ab415185f68dc2b8a034/hostname\\\",\\n \\\"HostsPath\\\": \\\"/var/lib/docker/containers/cb29ae4ab48a657c8c381eaccfebf96cdebd1b8eedd2ab415185f68dc2b8a034/hosts\\\",\\n \\\"LogPath\\\": \\\"\\\",\\n \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\\n \\\"RestartCount\\\": 0,\\n \\\"Driver\\\": \\\"overlay2\\\",\\n \\\"MountLabel\\\": \\\"\\\",\\n \\\"ProcessLabel\\\": \\\"\\\",\\n \\\"AppArmorProfile\\\": \\\"\\\",\\n \\\"ExecIDs\\\": null,\\n \\\"HostConfig\\\": {\\n \\\"Binds\\\": [\\n 
\\\"/var/run/ceph:/var/run/ceph:z\\\",\\n \\\"/etc/localtime:/etc/localtime:ro\\\",\\n \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\\n \\\"/etc/ceph:/etc/ceph:z\\\"\\n ],\\n \\\"ContainerIDFile\\\": \\\"\\\",\\n \\\"LogConfig\\\": {\\n \\\"Type\\\": \\\"journald\\\",\\n \\\"Config\\\": {}\\n },\\n \\\"NetworkMode\\\": \\\"host\\\",\\n \\\"PortBindings\\\": {},\\n \\\"RestartPolicy\\\": {\\n \\\"Name\\\": \\\"no\\\",\\n \\\"MaximumRetryCount\\\": 0\\n },\\n \\\"AutoRemove\\\": true,\\n \\\"VolumeDriver\\\": \\\"\\\",\\n \\\"VolumesFrom\\\": null,\\n \\\"CapAdd\\\": null,\\n \\\"CapDrop\\\": null,\\n \\\"Dns\\\": [],\\n \\\"DnsOptions\\\": [],\\n \\\"DnsSearch\\\": [],\\n \\\"ExtraHosts\\\": null,\\n \\\"GroupAdd\\\": null,\\n \\\"IpcMode\\\": \\\"\\\",\\n \\\"Cgroup\\\": \\\"\\\",\\n \\\"Links\\\": null,\\n \\\"OomScoreAdj\\\": 0,\\n \\\"PidMode\\\": \\\"\\\",\\n \\\"Privileged\\\": false,\\n \\\"PublishAllPorts\\\": false,\\n \\\"ReadonlyRootfs\\\": false,\\n \\\"SecurityOpt\\\": null,\\n \\\"UTSMode\\\": \\\"\\\",\\n \\\"UsernsMode\\\": \\\"\\\",\\n \\\"ShmSize\\\": 67108864,\\n \\\"Runtime\\\": \\\"docker-runc\\\",\\n \\\"ConsoleSize\\\": [\\n 0,\\n 0\\n ],\\n \\\"Isolation\\\": \\\"\\\",\\n \\\"CpuShares\\\": 0,\\n \\\"Memory\\\": 1073741824,\\n \\\"NanoCpus\\\": 0,\\n \\\"CgroupParent\\\": \\\"\\\",\\n \\\"BlkioWeight\\\": 0,\\n \\\"BlkioWeightDevice\\\": null,\\n \\\"BlkioDeviceReadBps\\\": null,\\n \\\"BlkioDeviceWriteBps\\\": null,\\n \\\"BlkioDeviceReadIOps\\\": null,\\n \\\"BlkioDeviceWriteIOps\\\": null,\\n \\\"CpuPeriod\\\": 0,\\n \\\"CpuQuota\\\": 100000,\\n \\\"CpuRealtimePeriod\\\": 0,\\n \\\"CpuRealtimeRuntime\\\": 0,\\n \\\"CpusetCpus\\\": \\\"\\\",\\n \\\"CpusetMems\\\": \\\"\\\",\\n \\\"Devices\\\": [],\\n \\\"DiskQuota\\\": 0,\\n \\\"KernelMemory\\\": 0,\\n \\\"MemoryReservation\\\": 0,\\n \\\"MemorySwap\\\": 2147483648,\\n \\\"MemorySwappiness\\\": -1,\\n \\\"OomKillDisable\\\": false,\\n \\\"PidsLimit\\\": 0,\\n \\\"Ulimits\\\": null,\\n 
\\\"CpuCount\\\": 0,\\n \\\"CpuPercent\\\": 0,\\n \\\"IOMaximumIOps\\\": 0,\\n \\\"IOMaximumBandwidth\\\": 0\\n },\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/72e8dec3b850511560ab35c5ce6e4273d355c4033e428929efa3c4a61bf32e87-init/diff:/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/diff:/var/lib/docker/overlay2/2e4510fb398c1ae72535c5c3f1f0f1546729fe945cd85f87dd450c522e8905ab/diff:/var/lib/docker/overlay2/ba0a06d1080745666a14fd468c920651d33a74f62e3c7d02ed110dfc641fac15/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/72e8dec3b850511560ab35c5ce6e4273d355c4033e428929efa3c4a61bf32e87/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/72e8dec3b850511560ab35c5ce6e4273d355c4033e428929efa3c4a61bf32e87/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/72e8dec3b850511560ab35c5ce6e4273d355c4033e428929efa3c4a61bf32e87/work\\\"\\n }\\n },\\n \\\"Mounts\\\": [\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/run/ceph\\\",\\n \\\"Destination\\\": \\\"/var/run/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/localtime\\\",\\n \\\"Destination\\\": \\\"/etc/localtime\\\",\\n \\\"Mode\\\": \\\"ro\\\",\\n \\\"RW\\\": false,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/lib/ceph\\\",\\n \\\"Destination\\\": \\\"/var/lib/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/ceph\\\",\\n \\\"Destination\\\": \\\"/etc/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"volume\\\",\\n \\\"Name\\\": 
\\\"8f3068437bdbae66693ee6dd4595d60e9eaff3d82d71ee0406b0e2dcd0a45c20\\\",\\n \\\"Source\\\": \\\"/var/lib/docker/volumes/8f3068437bdbae66693ee6dd4595d60e9eaff3d82d71ee0406b0e2dcd0a45c20/_data\\\",\\n \\\"Destination\\\": \\\"/etc/ganesha\\\",\\n \\\"Driver\\\": \\\"local\\\",\\n \\\"Mode\\\": \\\"\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"\\\"\\n }\\n ],\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"controller-0\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": true,\\n \\\"AttachStderr\\\": true,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"IP_VERSION=4\\\",\\n \\\"MON_IP=172.17.3.10\\\",\\n \\\"CLUSTER=ceph\\\",\\n \\\"FSID=78ace352-763a-11e8-9c1d-525400166144\\\",\\n \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\\n \\\"CEPH_DAEMON=MON\\\",\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-6\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": null,\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n 
\\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"NetworkSettings\\\": {\\n \\\"Bridge\\\": \\\"\\\",\\n \\\"SandboxID\\\": \\\"e052bee5d9af655352795529880d27cb4f393d81e46a07ca5c9dc883cb29c9c4\\\",\\n \\\"HairpinMode\\\": false,\\n \\\"LinkLocalIPv6Address\\\": \\\"\\\",\\n \\\"LinkLocalIPv6PrefixLen\\\": 0,\\n \\\"Ports\\\": {},\\n \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\\n 
\\\"SecondaryIPAddresses\\\": null,\\n \\\"SecondaryIPv6Addresses\\\": null,\\n \\\"EndpointID\\\": \\\"\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"MacAddress\\\": \\\"\\\",\\n \\\"Networks\\\": {\\n \\\"host\\\": {\\n \\\"IPAMConfig\\\": null,\\n \\\"Links\\\": null,\\n \\\"Aliases\\\": null,\\n \\\"NetworkID\\\": \\\"c9c6a3bb3898616d34a69d242692ca582d49c06f21e38e564a1a1599d7e4f817\\\",\\n \\\"EndpointID\\\": \\\"d91011ef02755b329a8a875f6807706c8374765c46706ed5043bb7dd08eab78d\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n \\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"MacAddress\\\": \\\"\\\"\\n }\\n }\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"cb29ae4ab48a657c8c381eaccfebf96cdebd1b8eedd2ab415185f68dc2b8a034\\\",\", \" \\\"Created\\\": \\\"2018-06-25T10:05:40.357427181Z\\\",\", \" \\\"Path\\\": \\\"/entrypoint.sh\\\",\", \" \\\"Args\\\": [],\", \" \\\"State\\\": {\", \" \\\"Status\\\": \\\"running\\\",\", \" \\\"Running\\\": true,\", \" \\\"Paused\\\": false,\", \" \\\"Restarting\\\": false,\", \" \\\"OOMKilled\\\": false,\", \" \\\"Dead\\\": false,\", \" \\\"Pid\\\": 26141,\", \" \\\"ExitCode\\\": 0,\", \" \\\"Error\\\": \\\"\\\",\", \" \\\"StartedAt\\\": \\\"2018-06-25T10:05:40.580604569Z\\\",\", \" \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\", \" },\", \" \\\"Image\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/cb29ae4ab48a657c8c381eaccfebf96cdebd1b8eedd2ab415185f68dc2b8a034/resolv.conf\\\",\", \" \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/cb29ae4ab48a657c8c381eaccfebf96cdebd1b8eedd2ab415185f68dc2b8a034/hostname\\\",\", \" \\\"HostsPath\\\": 
\\\"/var/lib/docker/containers/cb29ae4ab48a657c8c381eaccfebf96cdebd1b8eedd2ab415185f68dc2b8a034/hosts\\\",\", \" \\\"LogPath\\\": \\\"\\\",\", \" \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\", \" \\\"RestartCount\\\": 0,\", \" \\\"Driver\\\": \\\"overlay2\\\",\", \" \\\"MountLabel\\\": \\\"\\\",\", \" \\\"ProcessLabel\\\": \\\"\\\",\", \" \\\"AppArmorProfile\\\": \\\"\\\",\", \" \\\"ExecIDs\\\": null,\", \" \\\"HostConfig\\\": {\", \" \\\"Binds\\\": [\", \" \\\"/var/run/ceph:/var/run/ceph:z\\\",\", \" \\\"/etc/localtime:/etc/localtime:ro\\\",\", \" \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\", \" \\\"/etc/ceph:/etc/ceph:z\\\"\", \" ],\", \" \\\"ContainerIDFile\\\": \\\"\\\",\", \" \\\"LogConfig\\\": {\", \" \\\"Type\\\": \\\"journald\\\",\", \" \\\"Config\\\": {}\", \" },\", \" \\\"NetworkMode\\\": \\\"host\\\",\", \" \\\"PortBindings\\\": {},\", \" \\\"RestartPolicy\\\": {\", \" \\\"Name\\\": \\\"no\\\",\", \" \\\"MaximumRetryCount\\\": 0\", \" },\", \" \\\"AutoRemove\\\": true,\", \" \\\"VolumeDriver\\\": \\\"\\\",\", \" \\\"VolumesFrom\\\": null,\", \" \\\"CapAdd\\\": null,\", \" \\\"CapDrop\\\": null,\", \" \\\"Dns\\\": [],\", \" \\\"DnsOptions\\\": [],\", \" \\\"DnsSearch\\\": [],\", \" \\\"ExtraHosts\\\": null,\", \" \\\"GroupAdd\\\": null,\", \" \\\"IpcMode\\\": \\\"\\\",\", \" \\\"Cgroup\\\": \\\"\\\",\", \" \\\"Links\\\": null,\", \" \\\"OomScoreAdj\\\": 0,\", \" \\\"PidMode\\\": \\\"\\\",\", \" \\\"Privileged\\\": false,\", \" \\\"PublishAllPorts\\\": false,\", \" \\\"ReadonlyRootfs\\\": false,\", \" \\\"SecurityOpt\\\": null,\", \" \\\"UTSMode\\\": \\\"\\\",\", \" \\\"UsernsMode\\\": \\\"\\\",\", \" \\\"ShmSize\\\": 67108864,\", \" \\\"Runtime\\\": \\\"docker-runc\\\",\", \" \\\"ConsoleSize\\\": [\", \" 0,\", \" 0\", \" ],\", \" \\\"Isolation\\\": \\\"\\\",\", \" \\\"CpuShares\\\": 0,\", \" \\\"Memory\\\": 1073741824,\", \" \\\"NanoCpus\\\": 0,\", \" \\\"CgroupParent\\\": \\\"\\\",\", \" \\\"BlkioWeight\\\": 0,\", \" \\\"BlkioWeightDevice\\\": null,\", \" 
\\\"BlkioDeviceReadBps\\\": null,\", \" \\\"BlkioDeviceWriteBps\\\": null,\", \" \\\"BlkioDeviceReadIOps\\\": null,\", \" \\\"BlkioDeviceWriteIOps\\\": null,\", \" \\\"CpuPeriod\\\": 0,\", \" \\\"CpuQuota\\\": 100000,\", \" \\\"CpuRealtimePeriod\\\": 0,\", \" \\\"CpuRealtimeRuntime\\\": 0,\", \" \\\"CpusetCpus\\\": \\\"\\\",\", \" \\\"CpusetMems\\\": \\\"\\\",\", \" \\\"Devices\\\": [],\", \" \\\"DiskQuota\\\": 0,\", \" \\\"KernelMemory\\\": 0,\", \" \\\"MemoryReservation\\\": 0,\", \" \\\"MemorySwap\\\": 2147483648,\", \" \\\"MemorySwappiness\\\": -1,\", \" \\\"OomKillDisable\\\": false,\", \" \\\"PidsLimit\\\": 0,\", \" \\\"Ulimits\\\": null,\", \" \\\"CpuCount\\\": 0,\", \" \\\"CpuPercent\\\": 0,\", \" \\\"IOMaximumIOps\\\": 0,\", \" \\\"IOMaximumBandwidth\\\": 0\", \" },\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/72e8dec3b850511560ab35c5ce6e4273d355c4033e428929efa3c4a61bf32e87-init/diff:/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/diff:/var/lib/docker/overlay2/2e4510fb398c1ae72535c5c3f1f0f1546729fe945cd85f87dd450c522e8905ab/diff:/var/lib/docker/overlay2/ba0a06d1080745666a14fd468c920651d33a74f62e3c7d02ed110dfc641fac15/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/72e8dec3b850511560ab35c5ce6e4273d355c4033e428929efa3c4a61bf32e87/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/72e8dec3b850511560ab35c5ce6e4273d355c4033e428929efa3c4a61bf32e87/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/72e8dec3b850511560ab35c5ce6e4273d355c4033e428929efa3c4a61bf32e87/work\\\"\", \" }\", \" },\", \" \\\"Mounts\\\": [\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/run/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/run/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": 
\\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/localtime\\\",\", \" \\\"Destination\\\": \\\"/etc/localtime\\\",\", \" \\\"Mode\\\": \\\"ro\\\",\", \" \\\"RW\\\": false,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/ceph\\\",\", \" \\\"Destination\\\": \\\"/etc/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"volume\\\",\", \" \\\"Name\\\": \\\"8f3068437bdbae66693ee6dd4595d60e9eaff3d82d71ee0406b0e2dcd0a45c20\\\",\", \" \\\"Source\\\": \\\"/var/lib/docker/volumes/8f3068437bdbae66693ee6dd4595d60e9eaff3d82d71ee0406b0e2dcd0a45c20/_data\\\",\", \" \\\"Destination\\\": \\\"/etc/ganesha\\\",\", \" \\\"Driver\\\": \\\"local\\\",\", \" \\\"Mode\\\": \\\"\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"\\\"\", \" }\", \" ],\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"controller-0\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": true,\", \" \\\"AttachStderr\\\": true,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"IP_VERSION=4\\\",\", \" \\\"MON_IP=172.17.3.10\\\",\", \" \\\"CLUSTER=ceph\\\",\", \" \\\"FSID=78ace352-763a-11e8-9c1d-525400166144\\\",\", \" \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\", \" 
\\\"CEPH_DAEMON=MON\\\",\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-6\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": null,\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e 
MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"NetworkSettings\\\": {\", \" \\\"Bridge\\\": \\\"\\\",\", \" \\\"SandboxID\\\": \\\"e052bee5d9af655352795529880d27cb4f393d81e46a07ca5c9dc883cb29c9c4\\\",\", \" \\\"HairpinMode\\\": false,\", \" \\\"LinkLocalIPv6Address\\\": \\\"\\\",\", \" \\\"LinkLocalIPv6PrefixLen\\\": 0,\", \" \\\"Ports\\\": {},\", \" \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\", \" \\\"SecondaryIPAddresses\\\": null,\", \" \\\"SecondaryIPv6Addresses\\\": null,\", \" \\\"EndpointID\\\": \\\"\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"MacAddress\\\": \\\"\\\",\", \" \\\"Networks\\\": {\", \" \\\"host\\\": {\", \" \\\"IPAMConfig\\\": null,\", \" \\\"Links\\\": null,\", \" \\\"Aliases\\\": null,\", \" \\\"NetworkID\\\": \\\"c9c6a3bb3898616d34a69d242692ca582d49c06f21e38e564a1a1599d7e4f817\\\",\", \" \\\"EndpointID\\\": \\\"d91011ef02755b329a8a875f6807706c8374765c46706ed5043bb7dd08eab78d\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"MacAddress\\\": \\\"\\\"\", \" }\", \" }\", \" }\", \" 
}\", \"]\"]}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nMonday 25 June 2018 06:06:14 -0400 (0:00:00.692) 0:01:36.240 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nMonday 25 June 2018 06:06:14 -0400 (0:00:00.045) 0:01:36.285 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nMonday 25 June 2018 06:06:14 -0400 (0:00:00.050) 0:01:36.336 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nMonday 25 June 2018 06:06:14 -0400 (0:00:00.044) 0:01:36.381 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nMonday 25 June 2018 06:06:15 -0400 (0:00:00.049) 0:01:36.431 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nMonday 25 June 2018 06:06:15 -0400 (0:00:00.042) 0:01:36.473 *********** \nskipping: 
[controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nMonday 25 June 2018 06:06:15 -0400 (0:00:00.042) 0:01:36.516 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\"], \"delta\": \"0:00:00.032827\", \"end\": \"2018-06-25 10:06:15.896393\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:06:15.863566\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f 
'/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 
on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": 
\\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/2e4510fb398c1ae72535c5c3f1f0f1546729fe945cd85f87dd450c522e8905ab/diff:/var/lib/docker/overlay2/ba0a06d1080745666a14fd468c920651d33a74f62e3c7d02ed110dfc641fac15/diff\\\",\\n 
\\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" 
\\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker 
run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" 
\\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, 
Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/2e4510fb398c1ae72535c5c3f1f0f1546729fe945cd85f87dd450c522e8905ab/diff:/var/lib/docker/overlay2/ba0a06d1080745666a14fd468c920651d33a74f62e3c7d02ed110dfc641fac15/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nMonday 25 June 2018 06:06:15 -0400 (0:00:00.684) 0:01:37.200 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nMonday 25 June 2018 06:06:15 -0400 (0:00:00.047) 0:01:37.248 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nMonday 25 June 2018 06:06:15 -0400 (0:00:00.045) 0:01:37.293 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nMonday 25 June 2018 06:06:15 -0400 (0:00:00.041) 0:01:37.335 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nMonday 25 June 2018 06:06:16 -0400 (0:00:00.128) 0:01:37.463 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nMonday 25 June 2018 06:06:16 -0400 (0:00:00.044) 0:01:37.508 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nMonday 25 June 2018 06:06:16 -0400 (0:00:00.042) 0:01:37.551 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_mon_image_repodigest_before_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nMonday 25 June 2018 06:06:16 -0400 (0:00:00.076) 0:01:37.627 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nMonday 25 June 2018 06:06:16 -0400 (0:00:00.046) 0:01:37.674 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nMonday 25 June 2018 06:06:16 -0400 (0:00:00.043) 0:01:37.717 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nMonday 25 June 2018 06:06:16 -0400 (0:00:00.052) 0:01:37.769 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nMonday 25 June 2018 06:06:16 -0400 (0:00:00.048) 0:01:37.817 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nMonday 25 June 2018 06:06:16 -0400 (0:00:00.044) 0:01:37.862 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nMonday 25 June 2018 06:06:16 -0400 (0:00:00.043) 0:01:37.906 *********** \nok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.038939\", \"end\": \"2018-06-25 10:06:17.182628\", \"rc\": 0, \"start\": \"2018-06-25 10:06:17.143689\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-6: Pulling from 192.168.24.1:8787/rhceph\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Image is up to date for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Image is up to date for 192.168.24.1:8787/rhceph:3-6\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nMonday 25 June 2018 06:06:17 -0400 (0:00:00.582) 0:01:38.489 *********** \nchanged: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.031376\", \"end\": \"2018-06-25 10:06:17.748448\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:06:17.717072\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n 
\\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n 
\\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": 
{},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d 
--net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/2e4510fb398c1ae72535c5c3f1f0f1546729fe945cd85f87dd450c522e8905ab/diff:/var/lib/docker/overlay2/ba0a06d1080745666a14fd468c920651d33a74f62e3c7d02ed110dfc641fac15/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" 
\\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": 
\\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", 
\" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e 
CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/2e4510fb398c1ae72535c5c3f1f0f1546729fe945cd85f87dd450c522e8905ab/diff:/var/lib/docker/overlay2/ba0a06d1080745666a14fd468c920651d33a74f62e3c7d02ed110dfc641fac15/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/diff\\\",\", \" \\\"WorkDir\\\": 
\\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nMonday 25 June 2018 06:06:17 -0400 (0:00:00.570) 0:01:39.059 *********** \nok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nMonday 25 June 2018 06:06:17 -0400 (0:00:00.074) 0:01:39.133 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nMonday 25 June 2018 06:06:17 -0400 (0:00:00.047) 0:01:39.181 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nMonday 25 June 2018 06:06:17 -0400 (0:00:00.050) 0:01:39.232 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : 
set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nMonday 25 June 2018 06:06:17 -0400 (0:00:00.044) 0:01:39.277 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nMonday 25 June 2018 06:06:17 -0400 (0:00:00.045) 0:01:39.322 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nMonday 25 June 2018 06:06:17 -0400 (0:00:00.050) 0:01:39.372 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nMonday 25 June 2018 06:06:18 -0400 (0:00:00.048) 0:01:39.421 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nMonday 25 June 2018 06:06:18 -0400 (0:00:00.056) 0:01:39.478 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nMonday 25 June 2018 06:06:18 -0400 (0:00:00.052) 0:01:39.530 *********** \nskipping: [controller-0] => {\"changed\": 
false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nMonday 25 June 2018 06:06:18 -0400 (0:00:00.051) 0:01:39.581 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nMonday 25 June 2018 06:06:18 -0400 (0:00:00.045) 0:01:39.627 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nMonday 25 June 2018 06:06:18 -0400 (0:00:00.047) 0:01:39.674 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.528437\", \"end\": \"2018-06-25 10:06:19.423155\", \"rc\": 0, \"start\": \"2018-06-25 10:06:18.894718\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nMonday 25 June 2018 06:06:19 -0400 (0:00:01.046) 0:01:40.721 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nMonday 25 June 2018 06:06:19 -0400 (0:00:00.079) 0:01:40.800 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nMonday 25 June 2018 06:06:19 -0400 (0:00:00.052) 0:01:40.853 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nMonday 25 June 2018 06:06:19 -0400 (0:00:00.048) 0:01:40.902 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nMonday 25 June 2018 06:06:19 -0400 (0:00:00.081) 0:01:40.983 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release nautilus] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26\nMonday 25 June 2018 06:06:19 -0400 (0:00:00.049) 0:01:41.033 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nMonday 25 June 2018 06:06:19 -0400 (0:00:00.049) 0:01:41.082 *********** \nchanged: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": 
\"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : create ceph conf directory] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nMonday 25 June 2018 06:06:22 -0400 (0:00:02.315) 0:01:43.398 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate ceph configuration 
file: ceph.conf] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12\nMonday 25 June 2018 06:06:22 -0400 (0:00:00.048) 0:01:43.447 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : create a local fetch directory if it does not exist] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38\nMonday 25 June 2018 06:06:22 -0400 (0:00:00.051) 0:01:43.498 *********** \nok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 80, \"state\": \"directory\", \"uid\": 988}\n\nTASK [ceph-config : generate cluster uuid] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54\nMonday 25 June 2018 06:06:22 -0400 (0:00:00.195) 0:01:43.693 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : read cluster uuid if it already exists] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64\nMonday 25 June 2018 06:06:22 -0400 (0:00:00.052) 0:01:43.746 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : ensure /etc/ceph exists] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76\nMonday 25 June 2018 06:06:22 -0400 (0:00:00.046) 0:01:43.792 *********** \nchanged: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 
167}\n\nTASK [ceph-config : generate ceph.conf configuration file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84\nMonday 25 June 2018 06:06:22 -0400 (0:00:00.537) 0:01:44.329 *********** \nok: [controller-0] => {\"changed\": false, \"checksum\": \"677880bddaa262c511eb635c230f19e6a4ddfabe\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"13cb0c834a94e4916365ae02ba1fbe9e\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 761, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921182.99-109746653871474/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-config : set fsid fact when generate_fsid = true] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102\nMonday 25 June 2018 06:06:24 -0400 (0:00:01.848) 0:01:46.178 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : set_fact docker_exec_cmd] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:2\nMonday 25 June 2018 06:06:24 -0400 (0:00:00.049) 0:01:46.227 *********** \nok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd_mgr\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-mgr : create mgr directory] *****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:2\nMonday 25 June 2018 06:06:25 -0400 (0:00:00.192) 0:01:46.420 *********** \nok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-mgr : copy ceph keyring(s) if needed] *******************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:10\nMonday 25 June 2018 06:06:25 -0400 (0:00:00.539) 0:01:46.960 *********** \nchanged: [controller-0] => (item={u'dest': u'/var/lib/ceph/mgr/ceph-controller-0/keyring', u'name': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"dce8b853b5430d214621f9e0ba7d2feebbb2a1a5\", \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": true, \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"name\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"md5sum\": \"46173b1f477ccec40e6961621fd8c750\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921185.72-95684110235710/source\", \"state\": \"file\", \"uid\": 167}\nskipping: [controller-0] => (item={u'dest': u'/etc/ceph/ceph.client.admin.keyring', u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, \"item\": {\"copy_key\": false, \"dest\": \"/etc/ceph/ceph.client.admin.keyring\", \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : set mgr key permissions] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:24\nMonday 25 June 2018 06:06:28 -0400 (0:00:02.802) 0:01:49.763 *********** \nok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"state\": \"file\", \"uid\": 167}\n\nTASK [ceph-mgr : install ceph-mgr package on RedHat or SUSE] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:2\nMonday 25 June 2018 06:06:28 -0400 (0:00:00.629) 
0:01:50.392 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : install ceph mgr for debian] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:9\nMonday 25 June 2018 06:06:29 -0400 (0:00:00.046) 0:01:50.438 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : ensure systemd service override directory exists] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:17\nMonday 25 June 2018 06:06:29 -0400 (0:00:00.044) 0:01:50.483 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : add ceph-mgr systemd service overrides] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:25\nMonday 25 June 2018 06:06:29 -0400 (0:00:00.052) 0:01:50.535 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : start and add that the mgr service to the init sequence] ******\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:35\nMonday 25 June 2018 06:06:29 -0400 (0:00:00.046) 0:01:50.581 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : generate systemd unit file] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:2\nMonday 25 June 2018 06:06:29 -0400 (0:00:00.047) 0:01:50.629 *********** \nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0\nNOTIFIED HANDLER 
ceph-defaults : restart ceph mgr daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0\nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"fb2f3078fffe963a7fd0473c7b908931939d5c73\", \"dest\": \"/etc/systemd/system/ceph-mgr@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"7b527fb0a44d25cf825cb2b6fcb2b07e\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 733, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921189.37-59898925764271/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-mgr : systemd start mgr container] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:13\nMonday 25 June 2018 06:06:32 -0400 (0:00:02.982) 0:01:53.612 *********** \nok: [controller-0] => {\"changed\": false, \"enabled\": true, \"name\": \"ceph-mgr@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dmgr.slice basic.target docker.service systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", 
\"Delegate\": \"no\", \"Description\": \"Ceph Manager\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --net=host --memory=1g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro -e CLUSTER=ceph -e CEPH_DAEMON=MGR -e MGR_DASHBOARD=0 --name=ceph-mgr-controller-0 192.168.24.1:8787/rhceph:3-6 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mgr@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mgr@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": 
\"4096\", \"LimitNPROC\": \"127793\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127793\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mgr@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmgr.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", 
\"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmgr.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\n\nTASK [ceph-mgr : get enabled modules from ceph-mgr] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:19\nMonday 25 June 2018 06:06:33 -0400 (0:00:00.788) 0:01:54.401 *********** \nchanged: [controller-0 -> 192.168.24.14] => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"--format\", \"json\", \"mgr\", \"module\", \"ls\"], \"delta\": \"0:00:00.340218\", \"end\": \"2018-06-25 10:06:34.009653\", \"rc\": 0, \"start\": \"2018-06-25 10:06:33.669435\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"enabled_modules\\\":[\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[]}\", \"stdout_lines\": [\"\", \"{\\\"enabled_modules\\\":[\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[]}\"]}\n\nTASK [ceph-mgr : set _ceph_mgr_modules fact] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:26\nMonday 25 June 2018 06:06:33 -0400 (0:00:00.905) 0:01:55.306 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_ceph_mgr_modules\": {\"disabled_modules\": [], \"enabled_modules\": [\"restful\", \"status\"]}}, \"changed\": false}\n\nTASK [ceph-mgr : disable ceph mgr enabled modules] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:30\nMonday 25 June 2018 06:06:34 -0400 (0:00:00.104) 0:01:55.411 *********** \nchanged: [controller-0 -> 192.168.24.14] => (item=restful) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"mgr\", \"module\", \"disable\", \"restful\"], \"delta\": \"0:00:01.367127\", \"end\": \"2018-06-25 10:06:36.007945\", \"item\": \"restful\", 
\"rc\": 0, \"start\": \"2018-06-25 10:06:34.640818\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nskipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : add modules to ceph-mgr] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:41\nMonday 25 June 2018 06:06:35 -0400 (0:00:01.938) 0:01:57.349 *********** \nskipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******\nMonday 25 June 2018 06:06:35 -0400 (0:00:00.030) 0:01:57.380 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************\nMonday 25 June 2018 06:06:36 -0400 (0:00:00.073) 0:01:57.453 *********** \nok: [controller-0] => {\"changed\": false, \"checksum\": \"f36b3460f6762a853a3dab1958afb7d83ff8f234\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"mode\": \"0750\", \"owner\": \"root\", \"path\": \"/tmp/restart_mgr_daemon.sh\", \"secontext\": \"unconfined_u:object_r:user_home_t:s0\", \"size\": 570, \"state\": \"file\", \"uid\": 0}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***\nMonday 25 June 2018 06:06:38 -0400 (0:00:01.990) 0:01:59.443 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******\nMonday 25 June 2018 06:06:38 -0400 (0:00:00.082) 0:01:59.526 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", 
\"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********\nMonday 25 June 2018 06:06:38 -0400 (0:00:00.119) 0:01:59.646 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [set ceph manager install 'Complete'] *************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:129\nMonday 25 June 2018 06:06:38 -0400 (0:00:00.092) 0:01:59.738 *********** \nok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"end\": \"20180625060638Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nPLAY [osds] ********************************************************************\n\nTASK [set ceph osd install 'In Progress'] **************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:141\nMonday 25 June 2018 06:06:38 -0400 (0:00:00.143) 0:01:59.882 *********** \nok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"start\": \"20180625060638Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [ceph-defaults : check for a mon container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2\nMonday 25 June 2018 06:06:38 -0400 (0:00:00.080) 0:01:59.962 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for an osd container] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11\nMonday 25 June 2018 06:06:38 -0400 (0:00:00.040) 0:02:00.002 *********** \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", 
\"--filter=name=ceph-osd-ceph-0\"], \"delta\": \"0:00:00.026145\", \"end\": \"2018-06-25 10:06:39.219526\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:06:39.193381\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-defaults : check for a mds container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.502) 0:02:00.505 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rgw container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.043) 0:02:00.549 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mgr container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.040) 0:02:00.590 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rbd mirror container] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.045) 0:02:00.635 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a nfs container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.038) 0:02:00.674 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result 
was False\"}\n\nTASK [ceph-defaults : check for a ceph mon socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.038) 0:02:00.712 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mon socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.039) 0:02:00.751 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.036) 0:02:00.788 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph osd socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.037) 0:02:00.826 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph osd socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.039) 0:02:00.865 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50\nMonday 25 June 2018 06:06:39 -0400 
(0:00:00.041) 0:02:00.906 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mds socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.038) 0:02:00.945 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mds socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.039) 0:02:00.984 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.038) 0:02:01.023 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rgw socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.039) 0:02:01.062 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.038) 0:02:01.101 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.045) 0:02:01.146 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mgr socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.144) 0:02:01.290 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.041) 0:02:01.332 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137\nMonday 25 June 2018 06:06:39 -0400 (0:00:00.040) 0:02:01.373 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rbd mirror socket] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146\nMonday 25 June 2018 06:06:40 -0400 (0:00:00.040) 0:02:01.413 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156\nMonday 25 June 2018 06:06:40 -0400 (0:00:00.039) 0:02:01.453 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional 
result was False\"}\n\nTASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166\nMonday 25 June 2018 06:06:40 -0400 (0:00:00.038) 0:02:01.491 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175\nMonday 25 June 2018 06:06:40 -0400 (0:00:00.046) 0:02:01.537 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184\nMonday 25 June 2018 06:06:40 -0400 (0:00:00.038) 0:02:01.576 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194\nMonday 25 June 2018 06:06:40 -0400 (0:00:00.040) 0:02:01.617 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if it is atomic host] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2\nMonday 25 June 2018 06:06:40 -0400 (0:00:00.040) 0:02:01.658 *********** \nok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact is_atomic] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7\nMonday 25 June 2018 06:06:40 -0400 (0:00:00.478) 0:02:02.136 *********** \nok: [ceph-0] 
=> {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11\nMonday 25 June 2018 06:06:40 -0400 (0:00:00.067) 0:02:02.203 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"monitor_name\": \"ceph-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17\nMonday 25 June 2018 06:06:40 -0400 (0:00:00.065) 0:02:02.269 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact docker_exec_cmd] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23\nMonday 25 June 2018 06:06:40 -0400 (0:00:00.063) 0:02:02.332 *********** \nok: [ceph-0 -> 192.168.24.14] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : is ceph running already?] 
********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34
Monday 25 June 2018 06:06:41 -0400 (0:00:00.132) 0:02:02.464 ***********
ok: [ceph-0 -> 192.168.24.14] => {"changed": false, "cmd": ["timeout", "5", "docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "fsid"], "delta": "0:00:00.366077", "end": "2018-06-25 10:06:42.095079", "failed_when_result": false, "rc": 0, "start": "2018-06-25 10:06:41.729002", "stderr": "", "stderr_lines": [], "stdout": "78ace352-763a-11e8-9c1d-525400166144", "stdout_lines": ["78ace352-763a-11e8-9c1d-525400166144"]}

TASK [ceph-defaults : check if /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir directory exists] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47
Monday 25 June 2018 06:06:42 -0400 (0:00:00.932) 0:02:03.397 ***********
ok: [ceph-0 -> localhost] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57
Monday 25 June 2018 06:06:42 -0400 (0:00:00.182) 0:02:03.579 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : create a local fetch directory if it does not exist] *****
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64
Monday 25 June 2018 06:06:42 -0400 (0:00:00.046) 0:02:03.626 ***********
ok: [ceph-0 -> localhost] => {"changed": false, "gid": 985, "group": "mistral", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir", "secontext": "system_u:object_r:var_lib_t:s0", "size": 80, "state": "directory", "uid": 988}

TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74
Monday 25 June 2018 06:06:42 -0400 (0:00:00.189) 0:02:03.816 ***********
ok: [ceph-0] => {"ansible_facts": {"fsid": "78ace352-763a-11e8-9c1d-525400166144"}, "changed": false}

TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81
Monday 25 June 2018 06:06:42 -0400 (0:00:00.067) 0:02:03.883 ***********
ok: [ceph-0] => {"ansible_facts": {"ceph_release": "dummy"}, "changed": false}

TASK [ceph-defaults : generate cluster fsid] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85
Monday 25 June 2018 06:06:42 -0400 (0:00:00.069) 0:02:03.952 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96
Monday 25 June 2018 06:06:42 -0400 (0:00:00.040) 0:02:03.993 ***********
ok: [ceph-0 -> localhost] => {"changed": false, "cmd": "echo 78ace352-763a-11e8-9c1d-525400166144 | tee /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf", "rc": 0, "stdout": "skipped, since /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists", "stdout_lines": ["skipped, since /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists"]}

TASK [ceph-defaults : read cluster fsid if it already exists] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105
Monday 25 June 2018 06:06:42 -0400 (0:00:00.186) 0:02:04.180 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact fsid] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117
Monday 25 June 2018 06:06:42 -0400 (0:00:00.046) 0:02:04.226 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123
Monday 25 June 2018 06:06:42 -0400 (0:00:00.037) 0:02:04.264 ***********
ok: [ceph-0] => {"ansible_facts": {"mds_name": "ceph-0"}, "changed": false}

TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129
Monday 25 June 2018 06:06:42 -0400 (0:00:00.064) 0:02:04.329 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135
Monday 25 June 2018 06:06:42 -0400 (0:00:00.037) 0:02:04.366 ***********
ok: [ceph-0] => {"ansible_facts": {"rbd_client_directory_owner": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142
Monday 25 June 2018 06:06:43 -0400 (0:00:00.066) 0:02:04.433 ***********
ok: [ceph-0] => {"ansible_facts": {"rbd_client_directory_group": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149
Monday 25 June 2018 06:06:43 -0400 (0:00:00.069) 0:02:04.502 ***********
ok: [ceph-0] => {"ansible_facts": {"rbd_client_directory_mode": "0770"}, "changed": false}

TASK [ceph-defaults : resolve device link(s)] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156
Monday 25 June 2018 06:06:43 -0400 (0:00:00.069) 0:02:04.571 ***********
ok: [ceph-0] => (item=/dev/vdb) => {"changed": false, "cmd": ["readlink", "-f", "/dev/vdb"], "delta": "0:00:00.002517", "end": "2018-06-25 10:06:43.759815", "item": "/dev/vdb", "rc": 0, "start": "2018-06-25 10:06:43.757298", "stderr": "", "stderr_lines": [], "stdout": "/dev/vdb", "stdout_lines": ["/dev/vdb"]}

TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166
Monday 25 June 2018 06:06:43 -0400 (0:00:00.476) 0:02:05.048 ***********
ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-06-25 10:06:43.759815', '_ansible_no_log': False, u'stdout': u'/dev/vdb', u'cmd': [u'readlink', u'-f', u'/dev/vdb'], u'rc': 0, 'item': u'/dev/vdb', u'delta': u'0:00:00.002517', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdb', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdb'], u'start': u'2018-06-25 10:06:43.757298', '_ansible_ignore_errors': None, 'failed': False}) => {"ansible_facts": {"devices": ["/dev/vdb", "/dev/vdb"]}, "changed": false, "item": {"changed": false, "cmd": ["readlink", "-f", "/dev/vdb"], "delta": "0:00:00.002517", "end": "2018-06-25 10:06:43.759815", "failed": false, "invocation": {"module_args": {"_raw_params": "readlink -f /dev/vdb", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": "/dev/vdb", "rc": 0, "start": "2018-06-25 10:06:43.757298", "stderr": "", "stderr_lines": [], "stdout": "/dev/vdb", "stdout_lines": ["/dev/vdb"]}}

TASK [ceph-defaults : set_fact build final devices list] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175
Monday 25 June 2018 06:06:43 -0400 (0:00:00.081) 0:02:05.130 ***********
ok: [ceph-0] => {"ansible_facts": {"devices": ["/dev/vdb"]}, "changed": false}

TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183
Monday 25 June 2018 06:06:43 -0400 (0:00:00.072) 0:02:05.203 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190
Monday 25 June 2018 06:06:43 -0400 (0:00:00.039) 0:02:05.243 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197
Monday 25 June 2018 06:06:43 -0400 (0:00:00.041) 0:02:05.285 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204
Monday 25 June 2018 06:06:43 -0400 (0:00:00.042) 0:02:05.327 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211
Monday 25 June 2018 06:06:43 -0400 (0:00:00.043) 0:02:05.371 ***********
ok: [ceph-0] => {"ansible_facts": {"ceph_uid": 167}, "changed": false}

TASK [ceph-defaults : set_fact ceph_directories] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2
Monday 25 June 2018 06:06:44 -0400 (0:00:00.065) 0:02:05.436 ***********
ok: [ceph-0] => {"ansible_facts": {"ceph_directories": ["/etc/ceph", "/var/lib/ceph/", "/var/lib/ceph/mon", "/var/lib/ceph/osd", "/var/lib/ceph/mds", "/var/lib/ceph/tmp", "/var/lib/ceph/radosgw", "/var/lib/ceph/bootstrap-rgw", "/var/lib/ceph/bootstrap-mds", "/var/lib/ceph/bootstrap-osd", "/var/lib/ceph/bootstrap-rbd", "/var/run/ceph"]}, "changed": false}

TASK [ceph-defaults : create ceph initial directories] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18
Monday 25 June 2018 06:06:44 -0400 (0:00:00.065) 0:02:05.501 ***********
changed: [ceph-0] => (item=/etc/ceph) => {"changed": true, "gid": 167, "group": "167", "item": "/etc/ceph", "mode": "0755", "owner": "167", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/mon) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/mon", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/mon", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/osd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/osd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/mds) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/mds", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/tmp) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/tmp", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/tmp", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/radosgw) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/radosgw", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/radosgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-rgw", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-rgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-mds", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-osd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-rbd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-rbd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/run/ceph) => {"changed": true, "gid": 167, "group": "167", "item": "/var/run/ceph", "mode": "0755", "owner": "167", "path": "/var/run/ceph", "secontext": "unconfined_u:object_r:var_run_t:s0", "size": 40, "state": "directory", "uid": 167}

TASK [ceph-docker-common : fail if systemd is not present] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2
Monday 25 June 2018 06:06:49 -0400 (0:00:04.990) 0:02:10.492 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2
Monday 25 June 2018 06:06:49 -0400 (0:00:00.044) 0:02:10.536 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11
Monday 25 June 2018 06:06:49 -0400 (0:00:00.044) 0:02:10.581 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : remove ceph udev rules] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2
Monday 25 June 2018 06:06:49 -0400 (0:00:00.043) 0:02:10.625 ***********
ok: [ceph-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "path": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "state": "absent"}
ok: [ceph-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "path": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "state": "absent"}

TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14
Monday 25 June 2018 06:06:50 -0400 (0:00:00.906) 0:02:11.531 ***********
ok: [ceph-0] => {"ansible_facts": {"monitor_name": "ceph-0"}, "changed": false}

TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20
Monday 25 June 2018 06:06:50 -0400 (0:00:00.167) 0:02:11.699 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : get docker version] *********************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26
Monday 25 June 2018 06:06:50 -0400 (0:00:00.039) 0:02:11.738 ***********
ok: [ceph-0] => {"changed": false, "cmd": ["docker", "--version"], "delta": "0:00:00.023208", "end": "2018-06-25 10:06:51.048492", "rc": 0, "start": "2018-06-25 10:06:51.025284", "stderr": "", "stderr_lines": [], "stdout": "Docker version 1.13.1, build 94f4240/1.13.1", "stdout_lines": ["Docker version 1.13.1, build 94f4240/1.13.1"]}

TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32
Monday 25 June 2018 06:06:50 -0400 (0:00:00.600) 0:02:12.338 ***********
ok: [ceph-0] => {"ansible_facts": {"ceph_docker_version": "1.13.1,"}, "changed": false}

TASK [ceph-docker-common : check if a cluster is already running] **************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42
Monday 25 June 2018 06:06:51 -0400 (0:00:00.063) 0:02:12.401 ***********
ok: [ceph-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mon-ceph-0"], "delta": "0:00:00.026892", "end": "2018-06-25 10:06:51.617577", "failed_when_result": false, "rc": 0, "start": "2018-06-25 10:06:51.590685", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-docker-common : set_fact ceph_config_keys] **************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2
Monday 25 June 2018 06:06:51 -0400 (0:00:00.500) 0:02:12.902 ***********
ok: [ceph-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring"]}, "changed": false}

TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13
Monday 25 June 2018 06:06:51 -0400 (0:00:00.171) 0:02:13.074 ***********
ok: [ceph-0] => (item=controller-0) => {"ansible_facts": {"tmp_ceph_mgr_keys": "/etc/ceph/ceph.mgr.controller-0.keyring"}, "changed": false, "item": "controller-0"}

TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20
Monday 25 June 2018 06:06:51 -0400 (0:00:00.119) 0:02:13.194 ***********
ok: [ceph-0] => {"ansible_facts": {"ceph_mgr_keys": ["/etc/ceph/ceph.mgr.controller-0.keyring"]}, "changed": false}

TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25
Monday 25 June 2018 06:06:51 -0400 (0:00:00.077) 0:02:13.272 ***********
ok: [ceph-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "/etc/ceph/ceph.mgr.controller-0.keyring"]}, "changed": false}

TASK [ceph-docker-common : stat for ceph config and keys] **********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30
Monday 25 June 2018 06:06:51 -0400 (0:00:00.084) 0:02:13.356 ***********
ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"atime": 1529921145.491, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "87f8e20ff9c54bcb76bf97228cb0ba705b439784", "ctime": 1529921145.49, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 105756168, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529921145.49, "nlink": 1, "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 159, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/monmap-ceph", "stat": {"exists": false}}
ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"atime": 1529921145.947, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "ae4c70255ca42eb77eacd1cf1db0492ada8c18ae", "ctime": 1529921145.947, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 105756169, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529921145.947, "nlink": 1, "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 688, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "stat": {"atime": 1529921146.43, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "502b9fd25b9d73522bc5c0029ec362bd3ef148be", "ctime": 1529921146.43, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 223936, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529921146.43, "nlink": 1, "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 113, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "stat": {"atime": 1529921146.928, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "381a02ebfa1216a2a279ae665eeaebd1ce6de5f5", "ctime": 1529921146.928, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 7030010, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529921146.928, "nlink": 1, "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 113, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "stat": {"atime": 1529921147.406, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "3540de06c3ed498809bdddd6a350cae592455923", "ctime": 1529921147.406, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 10981164, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529921147.406, "nlink": 1, "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 113, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "stat": {"atime": 1529921147.902, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "c3545cb2f74ad0b3c3491481b9215a04221dc20f", "ctime": 1529921147.902, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 13656890, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529921147.902, "nlink": 1, "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 113, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.mgr.controller-0.keyring", "stat": {"atime": 1529921186.373, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "dce8b853b5430d214621f9e0ba7d2feebbb2a1a5", "ctime": 1529921150.129, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 105756170, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529921150.129, "nlink": 1, "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mgr.controller-0.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 67, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}

TASK [ceph-docker-common : fail if we find existing cluster files] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5
Monday 25 June 2018 06:06:53 -0400 (0:00:01.232) 0:02:14.589 ***********
skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529921145.49, u'block_size': 4096, u'inode': 105756168, u'isgid': False, u'size': 159, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring', u'xusr': False, u'atime': 1529921145.491, u'isdir': False, u'ctime': 1529921145.49, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'87f8e20ff9c54bcb76bf97228cb0ba705b439784', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/ceph.client.admin.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring"}}, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"atime": 1529921145.491, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "87f8e20ff9c54bcb76bf97228cb0ba705b439784", "ctime": 1529921145.49, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 105756168, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529921145.49, "nlink": 1, "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 159, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}], "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/monmap-ceph", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/monmap-ceph"}}, "item": "/etc/ceph/monmap-ceph", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529921145.947, u'block_size': 4096, u'inode': 105756169, u'isgid': False, u'size': 688, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring', u'xusr': False, u'atime': 1529921145.947, u'isdir': False, u'ctime': 1529921145.947, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'ae4c70255ca42eb77eacd1cf1db0492ada8c18ae', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/ceph.mon.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring"}}, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"atime": 1529921145.947, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "ae4c70255ca42eb77eacd1cf1db0492ada8c18ae", "ctime": 1529921145.947, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 105756169, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529921145.947, "nlink": 1, "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 688, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}], "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529921146.43, u'block_size': 4096, u'inode': 223936, u'isgid': False, u'size': 113, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': 
u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring', u'xusr': False, u'atime': 1529921146.43, u'isdir': False, u'ctime': 1529921146.43, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'502b9fd25b9d73522bc5c0029ec362bd3ef148be', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 1529921146.43, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": 
\"unknown\", \"checksum\": \"502b9fd25b9d73522bc5c0029ec362bd3ef148be\", \"ctime\": 1529921146.43, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 223936, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921146.43, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529921146.928, u'block_size': 4096, u'inode': 7030010, u'isgid': False, u'size': 113, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'xusr': False, u'atime': 1529921146.928, u'isdir': False, u'ctime': 1529921146.928, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': 
u'381a02ebfa1216a2a279ae665eeaebd1ce6de5f5', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1529921146.928, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"381a02ebfa1216a2a279ae665eeaebd1ce6de5f5\", \"ctime\": 1529921146.928, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 7030010, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": 
true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921146.928, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529921147.406, u'block_size': 4096, u'inode': 10981164, u'isgid': False, u'size': 113, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring', u'xusr': False, u'atime': 1529921147.406, u'isdir': False, u'ctime': 1529921147.406, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'3540de06c3ed498809bdddd6a350cae592455923', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': 
{u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1529921147.406, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"3540de06c3ed498809bdddd6a350cae592455923\", \"ctime\": 1529921147.406, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 10981164, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921147.406, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": 
true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529921147.902, u'block_size': 4096, u'inode': 13656890, u'isgid': False, u'size': 113, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'xusr': False, u'atime': 1529921147.902, u'isdir': False, u'ctime': 1529921147.902, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'c3545cb2f74ad0b3c3491481b9215a04221dc20f', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': 
None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1529921147.902, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"c3545cb2f74ad0b3c3491481b9215a04221dc20f\", \"ctime\": 1529921147.902, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 13656890, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921147.902, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', 
{'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529921150.129, u'block_size': 4096, u'inode': 105756170, u'isgid': False, u'size': 67, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1529921186.373, u'isdir': False, u'ctime': 1529921150.129, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'dce8b853b5430d214621f9e0ba7d2feebbb2a1a5', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, 
\"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"atime\": 1529921186.373, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"dce8b853b5430d214621f9e0ba7d2feebbb2a1a5\", \"ctime\": 1529921150.129, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 105756170, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921150.129, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nMonday 25 June 2018 06:06:53 -0400 (0:00:00.271) 0:02:14.861 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask 
path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nMonday 25 June 2018 06:06:53 -0400 (0:00:00.039) 0:02:14.900 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nMonday 25 June 2018 06:06:53 -0400 (0:00:00.037) 0:02:14.938 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nMonday 25 June 2018 06:06:53 -0400 (0:00:00.043) 0:02:14.981 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nMonday 25 June 2018 06:06:53 -0400 (0:00:00.042) 0:02:15.023 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nMonday 25 June 2018 06:06:53 -0400 (0:00:00.042) 0:02:15.065 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nMonday 25 June 2018 06:06:53 -0400 (0:00:00.038) 0:02:15.104 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nMonday 25 June 2018 06:06:53 -0400 (0:00:00.038) 0:02:15.143 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nMonday 25 June 2018 06:06:53 -0400 (0:00:00.045) 0:02:15.189 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nMonday 25 June 2018 06:06:53 -0400 (0:00:00.041) 0:02:15.230 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nMonday 25 June 2018 06:06:53 -0400 (0:00:00.047) 0:02:15.277 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nMonday 25 June 2018 06:06:53 -0400 (0:00:00.041) 0:02:15.319 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nMonday 25 June 2018 06:06:53 -0400 (0:00:00.042) 0:02:15.362 *********** \nskipping: [ceph-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.041) 0:02:15.404 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.049) 0:02:15.454 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.141) 0:02:15.595 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.040) 0:02:15.635 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.043) 0:02:15.679 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.039) 0:02:15.719 
*********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.038) 0:02:15.757 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.037) 0:02:15.795 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.046) 0:02:15.841 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.036) 0:02:15.878 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.038) 0:02:15.916 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.041) 0:02:15.958 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.038) 0:02:15.996 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.038) 0:02:16.034 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.045) 0:02:16.080 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.037) 0:02:16.118 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nMonday 25 June 2018 06:06:54 -0400 (0:00:00.041) 0:02:16.159 *********** \nok: [ceph-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", 
\"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:17.395069\", \"end\": \"2018-06-25 10:07:12.744333\", \"rc\": 0, \"start\": \"2018-06-25 10:06:55.349264\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-6: Pulling from 192.168.24.1:8787/rhceph\\n9a32f102e677: Pulling fs layer\\nb8aa42cec17a: Pulling fs layer\\nf00cbf28d025: Pulling fs layer\\nb8aa42cec17a: Verifying Checksum\\nb8aa42cec17a: Download complete\\n9a32f102e677: Verifying Checksum\\n9a32f102e677: Download complete\\nf00cbf28d025: Verifying Checksum\\nf00cbf28d025: Download complete\\n9a32f102e677: Pull complete\\nb8aa42cec17a: Pull complete\\nf00cbf28d025: Pull complete\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"9a32f102e677: Pulling fs layer\", \"b8aa42cec17a: Pulling fs layer\", \"f00cbf28d025: Pulling fs layer\", \"b8aa42cec17a: Verifying Checksum\", \"b8aa42cec17a: Download complete\", \"9a32f102e677: Verifying Checksum\", \"9a32f102e677: Download complete\", \"f00cbf28d025: Verifying Checksum\", \"f00cbf28d025: Download complete\", \"9a32f102e677: Pull complete\", \"b8aa42cec17a: Pull complete\", \"f00cbf28d025: Pull complete\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nMonday 25 June 2018 06:07:12 -0400 (0:00:17.877) 0:02:34.036 *********** \nchanged: [ceph-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.025287\", \"end\": \"2018-06-25 
10:07:13.241136\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:07:13.215849\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n 
\\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": 
\\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e 
IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/adf529a68f129c324f6caf826daa5f1bce018463f36dfe2327784845fb5bcf1d/diff:/var/lib/docker/overlay2/17a450161a816794364817ac5f8af8e22dd8241580c1df1e76d45fa5ddd83ad5/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/690fbc9b48c94d96067ff34224e3e1dd07727583cc96875ee18905d9d5fdf05a/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/690fbc9b48c94d96067ff34224e3e1dd07727583cc96875ee18905d9d5fdf05a/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/690fbc9b48c94d96067ff34224e3e1dd07727583cc96875ee18905d9d5fdf05a/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n 
\\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" 
\\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" 
\\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" 
\\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/adf529a68f129c324f6caf826daa5f1bce018463f36dfe2327784845fb5bcf1d/diff:/var/lib/docker/overlay2/17a450161a816794364817ac5f8af8e22dd8241580c1df1e76d45fa5ddd83ad5/diff\\\",\", \" \\\"MergedDir\\\": 
\\\"/var/lib/docker/overlay2/690fbc9b48c94d96067ff34224e3e1dd07727583cc96875ee18905d9d5fdf05a/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/690fbc9b48c94d96067ff34224e3e1dd07727583cc96875ee18905d9d5fdf05a/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/690fbc9b48c94d96067ff34224e3e1dd07727583cc96875ee18905d9d5fdf05a/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nMonday 25 June 2018 06:07:13 -0400 (0:00:00.502) 0:02:34.539 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nMonday 25 June 2018 06:07:13 -0400 (0:00:00.073) 0:02:34.612 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nMonday 25 June 2018 06:07:13 -0400 (0:00:00.042) 0:02:34.654 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nMonday 25 June 2018 06:07:13 -0400 (0:00:00.047) 0:02:34.702 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nMonday 25 June 2018 06:07:13 -0400 (0:00:00.042) 0:02:34.745 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nMonday 25 June 2018 06:07:13 -0400 (0:00:00.040) 0:02:34.785 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nMonday 25 June 2018 06:07:13 -0400 (0:00:00.040) 0:02:34.826 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nMonday 25 June 2018 06:07:13 -0400 (0:00:00.040) 0:02:34.866 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nMonday 25 June 2018 06:07:13 -0400 (0:00:00.043) 0:02:34.910 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image 
file] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nMonday 25 June 2018 06:07:13 -0400 (0:00:00.039) 0:02:34.949 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nMonday 25 June 2018 06:07:13 -0400 (0:00:00.040) 0:02:34.990 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nMonday 25 June 2018 06:07:13 -0400 (0:00:00.047) 0:02:35.038 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nMonday 25 June 2018 06:07:13 -0400 (0:00:00.041) 0:02:35.079 *********** \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.507126\", \"end\": \"2018-06-25 10:07:14.772392\", \"rc\": 0, \"start\": \"2018-06-25 10:07:14.265266\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nMonday 25 June 2018 06:07:14 -0400 (0:00:00.991) 0:02:36.070 *********** 
\nok: [ceph-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nMonday 25 June 2018 06:07:14 -0400 (0:00:00.068) 0:02:36.139 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nMonday 25 June 2018 06:07:14 -0400 (0:00:00.047) 0:02:36.186 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nMonday 25 June 2018 06:07:14 -0400 (0:00:00.045) 0:02:36.232 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nMonday 25 June 2018 06:07:14 -0400 (0:00:00.077) 0:02:36.309 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release nautilus] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26\nMonday 25 June 2018 06:07:14 -0400 (0:00:00.043) 0:02:36.352 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nMonday 25 June 2018 06:07:15 -0400 (0:00:00.040) 
0:02:36.393 *********** \nchanged: [ceph-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : create ceph conf directory] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nMonday 25 June 2018 06:07:17 -0400 (0:00:02.219) 0:02:38.612 *********** \nskipping: [ceph-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate ceph configuration file: ceph.conf] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12\nMonday 25 June 2018 06:07:17 -0400 (0:00:00.047) 0:02:38.660 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : create a local fetch directory if it does not exist] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38\nMonday 25 June 2018 06:07:17 -0400 (0:00:00.045) 0:02:38.705 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate cluster uuid] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54\nMonday 25 June 2018 06:07:17 -0400 (0:00:00.053) 0:02:38.758 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : read cluster uuid if it already exists] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64\nMonday 25 June 2018 06:07:17 -0400 (0:00:00.043) 0:02:38.802 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : ensure /etc/ceph exists] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76\nMonday 25 June 2018 06:07:17 -0400 (0:00:00.039) 0:02:38.842 *********** \nchanged: [ceph-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-config : generate ceph.conf configuration file] *********************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84\nMonday 25 June 2018 06:07:17 -0400 (0:00:00.484) 0:02:39.327 *********** \nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : copy mon restart script for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : copy osd restart script for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : copy mds restart script for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : copy rgw restart script for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : copy mgr restart script for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container 
for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for ceph-0\nchanged: [ceph-0] => {\"changed\": true, \"checksum\": \"4bdff1e64c4372595a71f3d358e1307a2bca8746\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"6419252280b3f08dc5a58f4743435fb1\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 871, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921237.98-252098485242928/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-config : set fsid fact when generate_fsid = true] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102\nMonday 25 June 2018 06:07:20 -0400 (0:00:03.026) 0:02:42.353 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure public_network configured] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:2\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.041) 0:02:42.394 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure cluster_network configured] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:8\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.044) 0:02:42.438 *********** \nskipping: [ceph-0] 
=> {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure journal_size configured] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:15\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.045) 0:02:42.484 *********** \nok: [ceph-0] => {\n \"msg\": \"WARNING: journal_size is configured to 512, which is less than 5GB. This is not recommended and can lead to severe issues.\"\n}\n\nTASK [ceph-osd : make sure an osd scenario was chosen] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:23\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.074) 0:02:42.558 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure a valid osd scenario was chosen] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:31\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.046) 0:02:42.604 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : verify devices have been provided] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:39\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.044) 0:02:42.649 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : check if osd_scenario lvm is supported by the selected ceph version] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:49\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.051) 0:02:42.701 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : verify lvm_volumes have been provided] ************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:59\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.044) 0:02:42.746 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure the lvm_volumes variable is a list] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:69\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.053) 0:02:42.799 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure the devices variable is a list] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:79\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.048) 0:02:42.848 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : verify dedicated devices have been provided] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:88\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.049) 0:02:42.897 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure the dedicated_devices variable is a list] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:98\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.048) 0:02:42.945 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : check if bluestore is supported by the selected ceph version] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:109\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.046) 0:02:42.992 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include system_tuning.yml] 
************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:5\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.045) 0:02:43.037 *********** \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml for ceph-0\n\nTASK [ceph-osd : disable osd directory parsing by updatedb] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:2\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.074) 0:02:43.112 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : disable osd directory path in updatedb.conf] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:11\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.039) 0:02:43.151 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : create tmpfiles.d directory] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:22\nMonday 25 June 2018 06:07:21 -0400 (0:00:00.040) 0:02:43.192 *********** \nok: [ceph-0] => {\"changed\": false, \"gid\": 0, \"group\": \"root\", \"mode\": \"0755\", \"owner\": \"root\", \"path\": \"/etc/tmpfiles.d\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 0}\n\nTASK [ceph-osd : disable transparent hugepage] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:33\nMonday 25 June 2018 06:07:22 -0400 (0:00:00.466) 0:02:43.658 *********** \nchanged: [ceph-0] => {\"changed\": true, \"checksum\": \"e000059a4cfd8ce350b13f14305a46eaf99849ba\", \"dest\": \"/etc/tmpfiles.d/ceph_transparent_hugepage.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"21ac872f3aa1fb44b01d4f7ab00a35fc\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 158, 
\"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921242.31-245170607652251/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-osd : get default vm.min_free_kbytes] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:45\nMonday 25 June 2018 06:07:24 -0400 (0:00:02.289) 0:02:45.948 *********** \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"sysctl\", \"-b\", \"vm.min_free_kbytes\"], \"delta\": \"0:00:00.003830\", \"end\": \"2018-06-25 10:07:25.128957\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:07:25.125127\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"67584\", \"stdout_lines\": [\"67584\"]}\n\nTASK [ceph-osd : set_fact vm_min_free_kbytes] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:52\nMonday 25 June 2018 06:07:25 -0400 (0:00:00.468) 0:02:46.416 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"vm_min_free_kbytes\": \"67584\"}, \"changed\": false}\n\nTASK [ceph-osd : apply operating system tuning] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:56\nMonday 25 June 2018 06:07:25 -0400 (0:00:00.061) 0:02:46.478 *********** \nchanged: [ceph-0] => (item={u'enable': u\"(osd_objectstore == 'bluestore')\", u'name': u'fs.aio-max-nr', u'value': u'1048576'}) => {\"changed\": true, \"item\": {\"enable\": \"(osd_objectstore == 'bluestore')\", \"name\": \"fs.aio-max-nr\", \"value\": \"1048576\"}}\nchanged: [ceph-0] => (item={u'name': u'fs.file-max', u'value': 26234859}) => {\"changed\": true, \"item\": {\"name\": \"fs.file-max\", \"value\": 26234859}}\nchanged: [ceph-0] => (item={u'name': u'vm.zone_reclaim_mode', u'value': 0}) => {\"changed\": true, \"item\": {\"name\": \"vm.zone_reclaim_mode\", \"value\": 0}}\nchanged: [ceph-0] => (item={u'name': u'vm.swappiness', u'value': 10}) => {\"changed\": true, \"item\": {\"name\": \"vm.swappiness\", 
\"value\": 10}}\nchanged: [ceph-0] => (item={u'name': u'vm.min_free_kbytes', u'value': u'67584'}) => {\"changed\": true, \"item\": {\"name\": \"vm.min_free_kbytes\", \"value\": \"67584\"}}\n\nTASK [ceph-osd : install dependencies] *****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:10\nMonday 25 June 2018 06:07:27 -0400 (0:00:02.411) 0:02:48.890 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include common.yml] *******************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:18\nMonday 25 June 2018 06:07:27 -0400 (0:00:00.044) 0:02:48.934 *********** \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml for ceph-0\n\nTASK [ceph-osd : create bootstrap-osd and osd directories] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:2\nMonday 25 June 2018 06:07:27 -0400 (0:00:00.068) 0:02:49.003 *********** \nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nok: [ceph-0] => (item=/var/lib/ceph/osd/) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-osd : copy ceph key(s) if needed] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:15\nMonday 25 June 2018 06:07:28 -0400 (0:00:00.932) 0:02:49.936 *********** \nchanged: [ceph-0] => (item={u'name': 
u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"502b9fd25b9d73522bc5c0029ec362bd3ef148be\", \"dest\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": true, \"name\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\"}, \"md5sum\": \"2f594fd27d9a2938d207fd0e4dcd1fdb\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 113, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921248.59-234204469734268/source\", \"state\": \"file\", \"uid\": 167}\nskipping: [ceph-0] => (item={u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, \"item\": {\"copy_key\": false, \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:2\nMonday 25 June 2018 06:07:30 -0400 (0:00:02.391) 0:02:52.327 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options 'ceph_disk_cli_options'] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:11\nMonday 25 June 2018 06:07:30 -0400 (0:00:00.038) 0:02:52.366 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph'] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:20\nMonday 25 June 2018 06:07:31 -0400 (0:00:00.051) 0:02:52.417 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore 
--dmcrypt'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:29\nMonday 25 June 2018 06:07:31 -0400 (0:00:00.050) 0:02:52.467 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --filestore --dmcrypt'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:38\nMonday 25 June 2018 06:07:31 -0400 (0:00:00.045) 0:02:52.512 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --dmcrypt'] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:47\nMonday 25 June 2018 06:07:31 -0400 (0:00:00.040) 0:02:52.553 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact docker_env_args '-e KV_TYPE=etcd -e KV_IP=127.0.0.1 -e KV_PORT=2379'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:56\nMonday 25 June 2018 06:07:31 -0400 (0:00:00.046) 0:02:52.600 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:62\nMonday 25 June 2018 06:07:31 -0400 (0:00:00.036) 0:02:52.636 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"docker_env_args\": \"-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0\"}, \"changed\": false}\n\nTASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=1'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:70\nMonday 25 June 2018 06:07:31 -0400 
(0:00:00.067) 0:02:52.703 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:78\nMonday 25 June 2018 06:07:31 -0400 (0:00:00.042) 0:02:52.745 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=1'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:86\nMonday 25 June 2018 06:07:31 -0400 (0:00:00.037) 0:02:52.782 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact devices generate device list when osd_auto_discovery] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:2\nMonday 25 June 2018 06:07:31 -0400 (0:00:00.044) 0:02:52.827 *********** \nskipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sectors': u'41943040', u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. 
Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {u'vda1': {u'sectorsize': 512, u'uuid': u'2018-06-25-05-49-20-00', u'links': {u'masters': [], u'labels': [u'config-2'], u'ids': [], u'uuids': [u'2018-06-25-05-49-20-00']}, u'sectors': u'2048', u'start': u'2048', u'holders': [], u'size': u'1.00 MB'}, u'vda2': {u'sectorsize': 512, u'uuid': u'fca00eb7-6dba-4ea0-b1e5-202b819f2b85', u'links': {u'masters': [], u'labels': [u'img-rootfs'], u'ids': [], u'uuids': [u'fca00eb7-6dba-4ea0-b1e5-202b819f2b85']}, u'sectors': u'41938911', u'start': u'4096', u'holders': [], u'size': u'20.00 GB'}}, u'holders': [], u'size': u'20.00 GB'}, 'key': u'vda'}) => {\"changed\": false, \"item\": {\"key\": \"vda\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {\"vda1\": {\"holders\": [], \"links\": {\"ids\": [], \"labels\": [\"config-2\"], \"masters\": [], \"uuids\": [\"2018-06-25-05-49-20-00\"]}, \"sectors\": \"2048\", \"sectorsize\": 512, \"size\": \"1.00 MB\", \"start\": \"2048\", \"uuid\": \"2018-06-25-05-49-20-00\"}, \"vda2\": {\"holders\": [], \"links\": {\"ids\": [], \"labels\": [\"img-rootfs\"], \"masters\": [], \"uuids\": [\"fca00eb7-6dba-4ea0-b1e5-202b819f2b85\"]}, \"sectors\": \"41938911\", \"sectorsize\": 512, \"size\": \"20.00 GB\", \"start\": \"4096\", \"uuid\": \"fca00eb7-6dba-4ea0-b1e5-202b819f2b85\"}}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"41943040\", \"sectorsize\": \"512\", \"size\": \"20.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', 
u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sectors': u'83886080', u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'40.00 GB'}, 'key': u'vdb'}) => {\"changed\": false, \"item\": {\"key\": \"vdb\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"83886080\", \"sectorsize\": \"512\", \"size\": \"40.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : resolve dedicated device link(s)] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:15\nMonday 25 June 2018 06:07:31 -0400 (0:00:00.050) 0:02:52.877 *********** \n\nTASK [ceph-osd : set_fact build dedicated_devices from resolved symlinks] ******\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:24\nMonday 25 June 2018 06:07:31 -0400 (0:00:00.039) 0:02:52.917 *********** \n\nTASK [ceph-osd : set_fact build final dedicated_devices list] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:32\nMonday 25 June 2018 06:07:31 -0400 (0:00:00.038) 0:02:52.955 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : read information about the devices] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:29\nMonday 25 June 2018 06:07:31 -0400 (0:00:00.039) 
0:02:52.995 *********** \nok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 40960.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}\n\nTASK [ceph-osd : check the partition status of the osd disks] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:2\nMonday 25 June 2018 06:07:32 -0400 (0:00:00.693) 0:02:53.688 *********** \nok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:00.007642\", \"end\": \"2018-06-25 10:07:33.004123\", \"failed_when_result\": false, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:32.996481\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : create gpt disk label] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:11\nMonday 25 June 2018 06:07:32 -0400 (0:00:00.602) 0:02:54.291 *********** \nok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdb'], u'end': u'2018-06-25 10:07:33.004123', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdb', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'start': u'2018-06-25 10:07:32.996481', u'delta': u'0:00:00.007642', 'item': u'/dev/vdb', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u'', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdb']) => {\"changed\": false, \"cmd\": [\"parted\", 
\"-s\", \"/dev/vdb\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.013028\", \"end\": \"2018-06-25 10:07:33.606261\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:00.007642\", \"end\": \"2018-06-25 10:07:33.004123\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdb\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:32.996481\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-06-25 10:07:33.593233\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : include scenarios/collocated.yml] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:41\nMonday 25 June 2018 06:07:33 -0400 (0:00:00.607) 0:02:54.899 *********** \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml for ceph-0\n\nTASK [ceph-osd : prepare ceph containerized osd disk collocated] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:5\nMonday 25 June 2018 06:07:33 -0400 (0:00:00.082) 0:02:54.981 *********** \nchanged: [ceph-0] => (item=[{'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, u'script': u\"unit 'MiB' print\", '_ansible_item_result': True, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': 
None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 40960.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdb -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdb -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-6\", \"delta\": \"0:00:06.868230\", \"end\": \"2018-06-25 10:07:41.149753\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 40960.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-06-25 10:07:34.281523\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: 
create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp 
mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-06-25 10:07:34'\\n+common_functions.sh:13: log(): echo '2018-06-25 10:07:34 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 77cb590c-de4c-4507-b665-fd28566a15bc /dev/vdb\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdb\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:77cb590c-de4c-4507-b665-fd28566a15bc --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/77cb590c-de4c-4507-b665-fd28566a15bc\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/77cb590c-de4c-4507-b665-fd28566a15bc\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdb\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:0f2ca894-7390-4044-aedf-d8eeb9dcbdd0 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdb1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\\nmount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.etxf1G with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.etxf1G\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.etxf1G\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.etxf1G\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/ceph_fsid.23091.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/ceph_fsid.23091.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/fsid.23091.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/fsid.23091.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/magic.23091.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/magic.23091.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/journal_uuid.23091.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/journal_uuid.23091.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.etxf1G/journal -> /dev/disk/by-partuuid/77cb590c-de4c-4507-b665-fd28566a15bc\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/type.23091.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/type.23091.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.etxf1G\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.etxf1G\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\\nupdate_partition: Calling partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-06-25 10:07:34'\", \"+common_functions.sh:13: log(): echo '2018-06-25 10:07:34 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 77cb590c-de4c-4507-b665-fd28566a15bc /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:77cb590c-de4c-4507-b665-fd28566a15bc --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/77cb590c-de4c-4507-b665-fd28566a15bc\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/77cb590c-de4c-4507-b665-fd28566a15bc\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdb\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:0f2ca894-7390-4044-aedf-d8eeb9dcbdd0 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdb1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\", \"mount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.etxf1G with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.etxf1G\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.etxf1G\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.etxf1G\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/ceph_fsid.23091.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/ceph_fsid.23091.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/fsid.23091.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/fsid.23091.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/magic.23091.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/magic.23091.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/journal_uuid.23091.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/journal_uuid.23091.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.etxf1G/journal -> /dev/disk/by-partuuid/77cb590c-de4c-4507-b665-fd28566a15bc\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/type.23091.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/type.23091.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.etxf1G\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.etxf1G\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdb2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-06-25 10:07:34 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-06-25 10:07:34 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-06-25 10:07:34 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-06-25 10:07:34 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdb\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mds/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.9L24Q2a7qz' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from 
root:root to ceph:ceph\\n2018-06-25 10:07:34 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdb1 isize=2048 agcount=4, agsize=2588607 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=10354427, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=5055, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdb2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdb1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-06-25 10:07:34 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-06-25 10:07:34 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-06-25 10:07:34 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-06-25 10:07:34 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdb\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"changed ownership of 
'/var/lib/ceph/mds/ceph-ceph-0' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.9L24Q2a7qz' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from root:root to ceph:ceph\", \"2018-06-25 10:07:34 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdb1 isize=2048 agcount=4, agsize=2588607 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=10354427, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=5055, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdb2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdb1' from root:disk to ceph:ceph\"]}\n\nTASK [ceph-osd : automatic prepare ceph containerized osd disk collocated] *****\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:30\nMonday 25 June 2018 06:07:41 -0400 
(0:00:07.479) 0:03:02.461 *********** \nskipping: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"item\": \"/dev/vdb\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : manually prepare ceph \"filestore\" non-containerized osd disk(s) with collocated osd data and journal] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:53\nMonday 25 June 2018 06:07:41 -0400 (0:00:00.053) 0:03:02.514 *********** \nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, u'script': u\"unit 'MiB' print\", '_ansible_item_result': True, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 40960.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 40960.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include 
scenarios/non-collocated.yml] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:48\nMonday 25 June 2018 06:07:41 -0400 (0:00:00.055) 0:03:02.569 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include scenarios/lvm.yml] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:56\nMonday 25 June 2018 06:07:41 -0400 (0:00:00.043) 0:03:02.613 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include activate_osds.yml] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:64\nMonday 25 June 2018 06:07:41 -0400 (0:00:00.042) 0:03:02.656 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include start_osds.yml] ***************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:72\nMonday 25 June 2018 06:07:41 -0400 (0:00:00.043) 0:03:02.699 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include docker/main.yml] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:80\nMonday 25 June 2018 06:07:41 -0400 (0:00:00.040) 0:03:02.740 *********** \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml for ceph-0\n\nTASK [ceph-osd : include start_docker_osd.yml] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml:2\nMonday 25 June 2018 06:07:41 -0400 (0:00:00.078) 0:03:02.818 *********** \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml for ceph-0\n\nTASK [ceph-osd : umount ceph disk (if on openstack)] ***************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:4\nMonday 25 June 2018 06:07:41 -0400 (0:00:00.063) 0:03:02.882 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : test if the container image has the disk_list function] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:13\nMonday 25 June 2018 06:07:41 -0400 (0:00:00.039) 0:03:02.921 *********** \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint=stat\", \"192.168.24.1:8787/rhceph:3-6\", \"disk_list.sh\"], \"delta\": \"0:00:00.430992\", \"end\": \"2018-06-25 10:07:42.569168\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:07:42.138176\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: 'disk_list.sh'\\n Size: 3726 \\tBlocks: 8 IO Block: 4096 regular file\\nDevice: 2bh/43d\\tInode: 33605760 Links: 1\\nAccess: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\\nAccess: 2018-04-18 13:02:03.000000000 +0000\\nModify: 2018-04-18 13:02:03.000000000 +0000\\nChange: 2018-06-25 10:07:00.699558531 +0000\\n Birth: -\", \"stdout_lines\": [\" File: 'disk_list.sh'\", \" Size: 3726 \\tBlocks: 8 IO Block: 4096 regular file\", \"Device: 2bh/43d\\tInode: 33605760 Links: 1\", \"Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\", \"Access: 2018-04-18 13:02:03.000000000 +0000\", \"Modify: 2018-04-18 13:02:03.000000000 +0000\", \"Change: 2018-06-25 10:07:00.699558531 +0000\", \" Birth: -\"]}\n\nTASK [ceph-osd : generate ceph osd docker run script] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:19\nMonday 25 June 2018 06:07:42 -0400 (0:00:00.937) 0:03:03.859 *********** \nchanged: [ceph-0] => {\"changed\": true, \"checksum\": \"6e2ae7f97fe861dbe9824133e6c912df4b7c8959\", \"dest\": \"/usr/share/ceph-osd-run.sh\", \"gid\": 0, \"group\": \"root\", 
\"md5sum\": \"97ef03a63aca5a84f85a7a061ad42a61\", \"mode\": \"0744\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:usr_t:s0\", \"size\": 1000, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921262.51-211790013762956/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-osd : generate systemd unit file] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:28\nMonday 25 June 2018 06:07:44 -0400 (0:00:02.274) 0:03:06.133 *********** \nchanged: [ceph-0] => {\"changed\": true, \"checksum\": \"b7abfb86a4af8d6e54d349965cae96bf9b995c49\", \"dest\": \"/etc/systemd/system/ceph-osd@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"8a53f95e6590750e7c4807589dd5864c\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 496, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921264.78-219742478778726/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-osd : systemd start osd container] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:39\nMonday 25 June 2018 06:07:47 -0400 (0:00:02.434) 0:03:08.568 *********** \nok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"enabled\": true, \"item\": \"/dev/vdb\", \"name\": \"ceph-osd@vdb\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dosd.slice basic.target docker.service systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", 
\"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdb.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", 
\"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"14904\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"14904\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdb.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": 
\"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\n\nTASK [ceph-osd : set_fact openstack_keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:87\nMonday 25 June 2018 06:07:47 -0400 (0:00:00.774) 0:03:09.343 *********** \nok: [ceph-0] => (item={u'mon_cap': u'allow r', u'name': u'client.openstack', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==', u'osd_cap': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'}) => {\"ansible_facts\": {\"openstack_keys_tmp\": [{\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}]}, \"changed\": false, \"item\": {\"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow r\", \"name\": \"client.openstack\", \"osd_cap\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}}\nok: [ceph-0] => (item={u'mon_cap': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"', u'mds_cap': u'allow *', u'name': u'client.manila', u'mgr_cap': u'allow *', 
u'mode': u'0600', u'key': u'AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==', u'osd_cap': u'allow rw'}) => {\"ansible_facts\": {\"openstack_keys_tmp\": [{\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}]}, \"changed\": false, \"item\": {\"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mds_cap\": \"allow *\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"name\": \"client.manila\", \"osd_cap\": \"allow rw\"}}\nok: [ceph-0] => (item={u'mon_cap': u'allow rw', u'name': u'client.radosgw', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==', u'osd_cap': u'allow rwx'}) => {\"ansible_facts\": {\"openstack_keys_tmp\": [{\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth 
del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}]}, \"changed\": false, \"item\": {\"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow rw\", \"name\": \"client.radosgw\", \"osd_cap\": \"allow rwx\"}}\n\nTASK [ceph-osd : set_fact keys - override keys_tmp with keys] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:95\nMonday 25 June 2018 06:07:48 -0400 (0:00:00.115) 0:03:09.458 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"openstack_keys\": [{\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}]}, \"changed\": false}\n\nTASK [ceph-osd : wait for all osd to be up] 
************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:2\nMonday 25 June 2018 06:07:48 -0400 (0:00:00.106) 0:03:09.565 *********** \nchanged: [ceph-0 -> 192.168.24.14] => {\"attempts\": 1, \"changed\": true, \"cmd\": \"test \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_osds\\\"])')\\\" = \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_up_osds\\\"])')\\\"\", \"delta\": \"0:00:00.685557\", \"end\": \"2018-06-25 10:07:49.589654\", \"rc\": 0, \"start\": \"2018-06-25 10:07:48.904097\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : list existing pool(s)] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:12\nMonday 25 June 2018 06:07:49 -0400 (0:00:01.356) 0:03:10.922 *********** \nchanged: [ceph-0 -> 192.168.24.14] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u''}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"images\", \"size\"], \"delta\": \"0:00:00.353822\", \"end\": \"2018-06-25 10:07:50.587628\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:50.233806\", \"stderr\": \"Error ENOENT: unrecognized pool 'images'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.14] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u''}) => 
{\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"metrics\", \"size\"], \"delta\": \"0:00:00.334785\", \"end\": \"2018-06-25 10:07:51.419709\", \"failed_when_result\": false, \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:51.084924\", \"stderr\": \"Error ENOENT: unrecognized pool 'metrics'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.14] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u''}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"backups\", \"size\"], \"delta\": \"0:00:00.323941\", \"end\": \"2018-06-25 10:07:52.236761\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:51.912820\", \"stderr\": \"Error ENOENT: unrecognized pool 'backups'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.14] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u''}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"vms\", \"size\"], \"delta\": \"0:00:00.337900\", \"end\": \"2018-06-25 10:07:53.066076\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:52.728176\", \"stderr\": \"Error ENOENT: 
unrecognized pool 'vms'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.14] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u''}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"volumes\", \"size\"], \"delta\": \"0:00:00.350538\", \"end\": \"2018-06-25 10:07:53.923307\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:53.572769\", \"stderr\": \"Error ENOENT: unrecognized pool 'volumes'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : create openstack pool(s)] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:21\nMonday 25 June 2018 06:07:53 -0400 (0:00:04.339) 0:03:15.261 *********** \nok: [ceph-0 -> 192.168.24.14] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'images'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'images', u'size'], u'end': u'2018-06-25 10:07:50.587628', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.14'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get images size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-25 
10:07:50.233806', u'delta': u'0:00:00.353822', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u\"Error ENOENT: unrecognized pool 'images'\", '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"images\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:01.009998\", \"end\": \"2018-06-25 10:07:55.587323\", \"item\": [{\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.14\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"images\", \"size\"], \"delta\": \"0:00:00.353822\", \"end\": \"2018-06-25 10:07:50.587628\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get images size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:50.233806\", \"stderr\": \"Error ENOENT: unrecognized pool 'images'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-06-25 10:07:54.577325\", \"stderr\": \"pool 'images' created\", \"stderr_lines\": [\"pool 'images' created\"], 
\"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0 -> 192.168.24.14] => (item=[{u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'metrics'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'metrics', u'size'], u'end': u'2018-06-25 10:07:51.419709', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.14'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-25 10:07:51.084924', u'delta': u'0:00:00.334785', 'item': {u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u\"Error ENOENT: unrecognized pool 'metrics'\", '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"metrics\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:00.953067\", \"end\": \"2018-06-25 10:07:57.065992\", \"item\": [{\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.14\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", 
\"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"metrics\", \"size\"], \"delta\": \"0:00:00.334785\", \"end\": \"2018-06-25 10:07:51.419709\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:51.084924\", \"stderr\": \"Error ENOENT: unrecognized pool 'metrics'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-06-25 10:07:56.112925\", \"stderr\": \"pool 'metrics' created\", \"stderr_lines\": [\"pool 'metrics' created\"], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0 -> 192.168.24.14] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'backups'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'backups', u'size'], u'end': u'2018-06-25 10:07:52.236761', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.14'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get backups size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-25 10:07:51.912820', u'delta': u'0:00:00.323941', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': 
u'backups', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u\"Error ENOENT: unrecognized pool 'backups'\", '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"backups\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:00.905811\", \"end\": \"2018-06-25 10:07:58.478913\", \"item\": [{\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.14\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"backups\", \"size\"], \"delta\": \"0:00:00.323941\", \"end\": \"2018-06-25 10:07:52.236761\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get backups size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:51.912820\", \"stderr\": \"Error ENOENT: unrecognized pool 'backups'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-06-25 10:07:57.573102\", \"stderr\": \"pool 'backups' created\", \"stderr_lines\": [\"pool 'backups' created\"], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0 -> 192.168.24.14] => 
(item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'vms'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'vms', u'size'], u'end': u'2018-06-25 10:07:53.066076', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.14'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get vms size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-25 10:07:52.728176', u'delta': u'0:00:00.337900', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u\"Error ENOENT: unrecognized pool 'vms'\", '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"vms\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:00.956912\", \"end\": \"2018-06-25 10:07:59.961855\", \"item\": [{\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.14\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"vms\", \"size\"], \"delta\": \"0:00:00.337900\", \"end\": \"2018-06-25 10:07:53.066076\", \"failed\": 
false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get vms size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:52.728176\", \"stderr\": \"Error ENOENT: unrecognized pool 'vms'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-06-25 10:07:59.004943\", \"stderr\": \"pool 'vms' created\", \"stderr_lines\": [\"pool 'vms' created\"], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0 -> 192.168.24.14] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'volumes'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'volumes', u'size'], u'end': u'2018-06-25 10:07:53.923307', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.14'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get volumes size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-25 10:07:53.572769', u'delta': u'0:00:00.350538', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u\"Error ENOENT: unrecognized pool 'volumes'\", 
'_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"volumes\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:01.000109\", \"end\": \"2018-06-25 10:08:01.469442\", \"item\": [{\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.14\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"volumes\", \"size\"], \"delta\": \"0:00:00.350538\", \"end\": \"2018-06-25 10:07:53.923307\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get volumes size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:53.572769\", \"stderr\": \"Error ENOENT: unrecognized pool 'volumes'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-06-25 10:08:00.469333\", \"stderr\": \"pool 'volumes' created\", \"stderr_lines\": [\"pool 'volumes' created\"], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : assign application to pool(s)] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:41\nMonday 25 June 2018 06:08:01 -0400 (0:00:07.545) 0:03:22.807 
*********** \nok: [ceph-0 -> 192.168.24.14] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u''}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"images\", \"rbd\"], \"delta\": \"0:00:01.340092\", \"end\": \"2018-06-25 10:08:03.437552\", \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"\"}, \"rc\": 0, \"start\": \"2018-06-25 10:08:02.097460\", \"stderr\": \"enabled application 'rbd' on pool 'images'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0 -> 192.168.24.14] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u''}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"metrics\", \"openstack_gnocchi\"], \"delta\": \"0:00:00.509473\", \"end\": \"2018-06-25 10:08:04.457169\", \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"\"}, \"rc\": 0, \"start\": \"2018-06-25 10:08:03.947696\", \"stderr\": \"enabled application 'openstack_gnocchi' on pool 'metrics'\", \"stderr_lines\": [\"enabled application 'openstack_gnocchi' on pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0 -> 192.168.24.14] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u''}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"backups\", \"rbd\"], \"delta\": \"0:00:00.496703\", \"end\": \"2018-06-25 10:08:05.473265\", \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"\"}, \"rc\": 0, \"start\": 
\"2018-06-25 10:08:04.976562\", \"stderr\": \"enabled application 'rbd' on pool 'backups'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0 -> 192.168.24.14] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u''}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"vms\", \"rbd\"], \"delta\": \"0:00:00.510497\", \"end\": \"2018-06-25 10:08:06.491356\", \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"\"}, \"rc\": 0, \"start\": \"2018-06-25 10:08:05.980859\", \"stderr\": \"enabled application 'rbd' on pool 'vms'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0 -> 192.168.24.14] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u''}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"volumes\", \"rbd\"], \"delta\": \"0:00:00.488328\", \"end\": \"2018-06-25 10:08:07.469816\", \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"\"}, \"rc\": 0, \"start\": \"2018-06-25 10:08:06.981488\", \"stderr\": \"enabled application 'rbd' on pool 'volumes'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : create openstack cephx key(s)] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:50\nMonday 25 June 2018 06:08:07 -0400 (0:00:05.985) 0:03:28.792 *********** \nchanged: [ceph-0 -> 192.168.24.14] => (item={'caps': {'mds': u'', 'osd': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, 
allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics', 'mon': u'allow r', 'mgr': u'allow *'}, 'mode': u'0600', 'key': u'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==', 'name': u'client.openstack'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.openstack.keyring\"], \"delta\": \"0:00:00.814098\", \"end\": \"2018-06-25 10:08:09.116122\", \"item\": {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"rc\": 0, \"start\": \"2018-06-25 10:08:08.302024\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.14] => (item={'caps': {'mds': u'allow *', 'osd': u'allow rw', 'mon': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"', 'mgr': u'allow *'}, 'name': u'client.manila', 'key': u'AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==', 'mode': u'0600'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.manila.keyring\"], \"delta\": \"0:00:00.803924\", \"end\": \"2018-06-25 10:08:10.438462\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": 
\"0600\", \"name\": \"client.manila\"}, \"rc\": 0, \"start\": \"2018-06-25 10:08:09.634538\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.14] => (item={'caps': {'mds': u'', 'osd': u'allow rwx', 'mon': u'allow rw', 'mgr': u'allow *'}, 'mode': u'0600', 'key': u'AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==', 'name': u'client.radosgw'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.radosgw.keyring\"], \"delta\": \"0:00:00.862252\", \"end\": \"2018-06-25 10:08:11.806074\", \"item\": {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"rc\": 0, \"start\": \"2018-06-25 10:08:10.943822\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : fetch openstack cephx key(s)] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:63\nMonday 25 June 2018 06:08:11 -0400 (0:00:04.338) 0:03:33.130 *********** \nchanged: [ceph-0 -> 192.168.24.14] => (item={'caps': {'mds': u'', 'osd': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics', 'mon': u'allow r', 'mgr': u'allow *'}, 'mode': u'0600', 'key': u'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==', 'name': u'client.openstack'}) => {\"changed\": true, \"checksum\": \"56011607b6d88d1e1f856a2666ca00634ee8af81\", \"dest\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/etc/ceph/ceph.client.openstack.keyring\", \"item\": {\"caps\": {\"mds\": \"\", \"mgr\": 
\"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"md5sum\": \"6701009b87ea7660b3369abb8fdc0536\", \"remote_checksum\": \"56011607b6d88d1e1f856a2666ca00634ee8af81\", \"remote_md5sum\": null}\nchanged: [ceph-0 -> 192.168.24.14] => (item={'caps': {'mds': u'allow *', 'osd': u'allow rw', 'mon': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"', 'mgr': u'allow *'}, 'name': u'client.manila', 'key': u'AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==', 'mode': u'0600'}) => {\"changed\": true, \"checksum\": \"c017bc60396016c3f00762471b81bf9c6cd4b443\", \"dest\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/etc/ceph/ceph.client.manila.keyring\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"md5sum\": \"d2b0ce76144746e6b7bff711e577e4ac\", \"remote_checksum\": \"c017bc60396016c3f00762471b81bf9c6cd4b443\", \"remote_md5sum\": null}\nchanged: [ceph-0 -> 192.168.24.14] => (item={'caps': {'mds': u'', 'osd': u'allow rwx', 'mon': u'allow rw', 'mgr': u'allow *'}, 'mode': u'0600', 'key': u'AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==', 'name': u'client.radosgw'}) => {\"changed\": true, \"checksum\": \"e69e019107a5ba0f730c2d85403f54a4f0dd5e61\", \"dest\": 
\"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/etc/ceph/ceph.client.radosgw.keyring\", \"item\": {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"md5sum\": \"a5f4e837b2107c38fbb797ad153986ba\", \"remote_checksum\": \"e69e019107a5ba0f730c2d85403f54a4f0dd5e61\", \"remote_md5sum\": null}\n\nTASK [ceph-osd : copy to other mons the openstack cephx key(s)] ****************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:71\nMonday 25 June 2018 06:08:13 -0400 (0:00:01.533) 0:03:34.664 *********** \nchanged: [ceph-0 -> 192.168.24.14] => (item=[u'controller-0', {'name': u'client.openstack', 'mode': u'0600', 'key': u'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==', 'caps': {'mds': u'', 'osd': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics', 'mgr': u'allow *', 'mon': u'allow r'}}]) => {\"changed\": true, \"checksum\": \"56011607b6d88d1e1f856a2666ca00634ee8af81\", \"dest\": \"/etc/ceph/ceph.client.openstack.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.openstack.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 299, \"state\": \"file\", \"uid\": 167}\nchanged: [ceph-0 -> 192.168.24.14] => (item=[u'controller-0', {'mode': u'0600', 'name': u'client.manila', 
'key': u'AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==', 'caps': {'mds': u'allow *', 'osd': u'allow rw', 'mgr': u'allow *', 'mon': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"'}}]) => {\"changed\": true, \"checksum\": \"c017bc60396016c3f00762471b81bf9c6cd4b443\", \"dest\": \"/etc/ceph/ceph.client.manila.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.manila.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 276, \"state\": \"file\", \"uid\": 167}\nchanged: [ceph-0 -> 192.168.24.14] => (item=[u'controller-0', {'name': u'client.radosgw', 'mode': u'0600', 'key': u'AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==', 'caps': {'mds': u'', 'osd': u'allow rwx', 'mgr': u'allow *', 'mon': u'allow rw'}}]) => {\"changed\": true, \"checksum\": \"e69e019107a5ba0f730c2d85403f54a4f0dd5e61\", \"dest\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 149, \"state\": \"file\", \"uid\": 167}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******\nMonday 25 June 
2018 06:08:19 -0400 (0:00:05.914) 0:03:40.579 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mon restart script] **********************\nMonday 25 June 2018 06:08:19 -0400 (0:00:00.069) 0:03:40.648 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***\nMonday 25 June 2018 06:08:19 -0400 (0:00:00.043) 0:03:40.692 *********** \nskipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******\nMonday 25 June 2018 06:08:19 -0400 (0:00:00.076) 0:03:40.769 *********** \nskipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********\nMonday 25 June 2018 06:08:19 -0400 (0:00:00.075) 0:03:40.844 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******\nMonday 25 June 2018 06:08:19 -0400 (0:00:00.062) 0:03:40.906 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy osd restart script] **********************\nMonday 25 June 2018 06:08:19 -0400 (0:00:00.060) 0:03:40.967 *********** \nchanged: [ceph-0] => {\"changed\": true, \"checksum\": \"9a770971b362c519fc75c5228fc22dd8d4cc68aa\", \"dest\": \"/tmp/restart_osd_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"c42d82e9b9c002f16b40c524607c38ea\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_home_t:s0\", 
\"size\": 3060, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921299.64-250500451455531/source\", \"state\": \"file\", \"uid\": 0}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***\nMonday 25 June 2018 06:08:21 -0400 (0:00:02.288) 0:03:43.256 *********** \nskipping: [ceph-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******\nMonday 25 June 2018 06:08:21 -0400 (0:00:00.079) 0:03:43.335 *********** \nskipping: [ceph-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.084) 0:03:43.419 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.064) 0:03:43.484 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mds restart script] **********************\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.064) 0:03:43.549 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.039) 0:03:43.588 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.050) 0:03:43.639 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.046) 0:03:43.685 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.062) 0:03:43.748 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.064) 0:03:43.812 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.040) 0:03:43.853 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.047) 0:03:43.900 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.045) 0:03:43.945 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.059) 0:03:44.005 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.063) 0:03:44.069 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.040) 0:03:44.109 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.046) 0:03:44.156 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.047) 0:03:44.204 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.059) 0:03:44.263 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.060) 0:03:44.324 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***\nMonday 25 June 2018 06:08:22 -0400 (0:00:00.040) 0:03:44.365 *********** \nskipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.071) 0:03:44.437 *********** \nskipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] 
********\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.076) 0:03:44.513 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [set ceph osd install 'Complete'] *****************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:156\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.083) 0:03:44.597 *********** \nok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"end\": \"20180625060823Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nPLAY [mdss] ********************************************************************\nskipping: no hosts matched\n\nPLAY [rgws] ********************************************************************\nskipping: no hosts matched\n\nPLAY [nfss] ********************************************************************\nskipping: no hosts matched\n\nPLAY [rbdmirrors] **************************************************************\nskipping: no hosts matched\n\nPLAY [restapis] ****************************************************************\nskipping: no hosts matched\n\nPLAY [clients] *****************************************************************\n\nTASK [set ceph client install 'In Progress'] ***********************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:307\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.147) 0:03:44.744 *********** \nok: [compute-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_client\": {\"start\": \"20180625060823Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [ceph-defaults : check for a mon container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.074) 0:03:44.819 *********** \nskipping: 
[compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for an osd container] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.044) 0:03:44.863 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mds container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.044) 0:03:44.908 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rgw container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.043) 0:03:44.951 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mgr container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.048) 0:03:45.000 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rbd mirror container] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.042) 0:03:45.043 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a nfs container] *******************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.041) 0:03:45.085 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mon socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.044) 0:03:45.130 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mon socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.040) 0:03:45.171 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.038) 0:03:45.209 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph osd socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.041) 0:03:45.250 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph osd socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.041) 0:03:45.292 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.043) 0:03:45.335 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mds socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59\nMonday 25 June 2018 06:08:23 -0400 (0:00:00.048) 0:03:45.384 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mds socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69\nMonday 25 June 2018 06:08:24 -0400 (0:00:00.044) 0:03:45.429 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79\nMonday 25 June 2018 06:08:24 -0400 (0:00:00.045) 0:03:45.475 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rgw socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88\nMonday 25 June 2018 06:08:24 -0400 (0:00:00.045) 0:03:45.520 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98\nMonday 25 
June 2018 06:08:24 -0400 (0:00:00.046) 0:03:45.567 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108
Monday 25 June 2018 06:08:24 -0400 (0:00:00.045) 0:03:45.612 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mgr socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117
Monday 25 June 2018 06:08:24 -0400 (0:00:00.045) 0:03:45.658 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127
Monday 25 June 2018 06:08:24 -0400 (0:00:00.045) 0:03:45.703 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137
Monday 25 June 2018 06:08:24 -0400 (0:00:00.046) 0:03:45.749 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146
Monday 25 June 2018 06:08:24 -0400 (0:00:00.045) 0:03:45.795 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156
Monday 25 June 2018 06:08:24 -0400 (0:00:00.186) 0:03:45.982 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166
Monday 25 June 2018 06:08:24 -0400 (0:00:00.042) 0:03:46.024 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175
Monday 25 June 2018 06:08:24 -0400 (0:00:00.044) 0:03:46.069 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184
Monday 25 June 2018 06:08:24 -0400 (0:00:00.040) 0:03:46.110 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194
Monday 25 June 2018 06:08:24 -0400 (0:00:00.043) 0:03:46.153 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if it is atomic host] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2
Monday 25 June 2018 06:08:24 -0400 (0:00:00.040) 0:03:46.194 *********** 
ok: [compute-0] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact is_atomic] **************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7
Monday 25 June 2018 06:08:25 -0400 (0:00:00.526) 0:03:46.721 *********** 
ok: [compute-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11
Monday 25 June 2018 06:08:25 -0400 (0:00:00.070) 0:03:46.791 *********** 
ok: [compute-0] => {"ansible_facts": {"monitor_name": "compute-0"}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17
Monday 25 June 2018 06:08:25 -0400 (0:00:00.072) 0:03:46.863 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23
Monday 25 June 2018 06:08:25 -0400 (0:00:00.071) 0:03:46.935 *********** 
ok: [compute-0 -> 192.168.24.14] => {"ansible_facts": {"docker_exec_cmd": "docker exec ceph-mon-controller-0"}, "changed": false}

TASK [ceph-defaults : is ceph running already?] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34
Monday 25 June 2018 06:08:25 -0400 (0:00:00.133) 0:03:47.068 *********** 
ok: [compute-0 -> 192.168.24.14] => {"changed": false, "cmd": ["timeout", "5", "docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "fsid"], "delta": "0:00:00.366285", "end": "2018-06-25 10:08:26.669548", "failed_when_result": false, "rc": 0, "start": "2018-06-25 10:08:26.303263", "stderr": "", "stderr_lines": [], "stdout": "78ace352-763a-11e8-9c1d-525400166144", "stdout_lines": ["78ace352-763a-11e8-9c1d-525400166144"]}

TASK [ceph-defaults : check if /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir directory exists] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47
Monday 25 June 2018 06:08:26 -0400 (0:00:00.897) 0:03:47.966 *********** 
ok: [compute-0 -> localhost] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57
Monday 25 June 2018 06:08:26 -0400 (0:00:00.178) 0:03:48.145 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : create a local fetch directory if it does not exist] *****
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64
Monday 25 June 2018 06:08:26 -0400 (0:00:00.046) 0:03:48.191 *********** 
ok: [compute-0 -> localhost] => {"changed": false, "gid": 985, "group": "mistral", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir", "secontext": "system_u:object_r:var_lib_t:s0", "size": 80, "state": "directory", "uid": 988}

TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74
Monday 25 June 2018 06:08:26 -0400 (0:00:00.178) 0:03:48.369 *********** 
ok: [compute-0] => {"ansible_facts": {"fsid": "78ace352-763a-11e8-9c1d-525400166144"}, "changed": false}

TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81
Monday 25 June 2018 06:08:27 -0400 (0:00:00.072) 0:03:48.442 *********** 
ok: [compute-0] => {"ansible_facts": {"ceph_release": "dummy"}, "changed": false}

TASK [ceph-defaults : generate cluster fsid] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85
Monday 25 June 2018 06:08:27 -0400 (0:00:00.076) 0:03:48.519 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96
Monday 25 June 2018 06:08:27 -0400 (0:00:00.048) 0:03:48.568 *********** 
ok: [compute-0 -> localhost] => {"changed": false, "cmd": "echo 78ace352-763a-11e8-9c1d-525400166144 | tee /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf", "rc": 0, "stdout": "skipped, since /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists", "stdout_lines": ["skipped, since /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists"]}

TASK [ceph-defaults : read cluster fsid if it already exists] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105
Monday 25 June 2018 06:08:27 -0400 (0:00:00.180) 0:03:48.748 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact fsid] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117
Monday 25 June 2018 06:08:27 -0400 (0:00:00.040) 0:03:48.788 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123
Monday 25 June 2018 06:08:27 -0400 (0:00:00.036) 0:03:48.825 *********** 
ok: [compute-0] => {"ansible_facts": {"mds_name": "compute-0"}, "changed": false}

TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129
Monday 25 June 2018 06:08:27 -0400 (0:00:00.066) 0:03:48.892 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135
Monday 25 June 2018 06:08:27 -0400 (0:00:00.040) 0:03:48.932 *********** 
ok: [compute-0] => {"ansible_facts": {"rbd_client_directory_owner": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142
Monday 25 June 2018 06:08:27 -0400 (0:00:00.067) 0:03:49.000 *********** 
ok: [compute-0] => {"ansible_facts": {"rbd_client_directory_group": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149
Monday 25 June 2018 06:08:27 -0400 (0:00:00.068) 0:03:49.068 *********** 
ok: [compute-0] => {"ansible_facts": {"rbd_client_directory_mode": "0770"}, "changed": false}

TASK [ceph-defaults : resolve device link(s)] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156
Monday 25 June 2018 06:08:27 -0400 (0:00:00.068) 0:03:49.137 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166
Monday 25 June 2018 06:08:27 -0400 (0:00:00.042) 0:03:49.179 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact build final devices list] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175
Monday 25 June 2018 06:08:27 -0400 (0:00:00.041) 0:03:49.221 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183
Monday 25 June 2018 06:08:27 -0400 (0:00:00.046) 0:03:49.267 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190
Monday 25 June 2018 06:08:27 -0400 (0:00:00.039) 0:03:49.307 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197
Monday 25 June 2018 06:08:27 -0400 (0:00:00.040) 0:03:49.348 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204
Monday 25 June 2018 06:08:27 -0400 (0:00:00.041) 0:03:49.390 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211
Monday 25 June 2018 06:08:28 -0400 (0:00:00.041) 0:03:49.432 *********** 
ok: [compute-0] => {"ansible_facts": {"ceph_uid": 167}, "changed": false}

TASK [ceph-defaults : set_fact ceph_directories] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2
Monday 25 June 2018 06:08:28 -0400 (0:00:00.073) 0:03:49.505 *********** 
ok: [compute-0] => {"ansible_facts": {"ceph_directories": ["/etc/ceph", "/var/lib/ceph/", "/var/lib/ceph/mon", "/var/lib/ceph/osd", "/var/lib/ceph/mds", "/var/lib/ceph/tmp", "/var/lib/ceph/radosgw", "/var/lib/ceph/bootstrap-rgw", "/var/lib/ceph/bootstrap-mds", "/var/lib/ceph/bootstrap-osd", "/var/lib/ceph/bootstrap-rbd", "/var/run/ceph"]}, "changed": false}

TASK [ceph-defaults : create ceph initial directories] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18
Monday 25 June 2018 06:08:28 -0400 (0:00:00.068) 0:03:49.574 *********** 
changed: [compute-0] => (item=/etc/ceph) => {"changed": true, "gid": 167, "group": "167", "item": "/etc/ceph", "mode": "0755", "owner": "167", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [compute-0] => (item=/var/lib/ceph/) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [compute-0] => (item=/var/lib/ceph/mon) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/mon", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/mon", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [compute-0] => (item=/var/lib/ceph/osd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/osd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [compute-0] => (item=/var/lib/ceph/mds) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/mds", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [compute-0] => (item=/var/lib/ceph/tmp) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/tmp", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/tmp", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [compute-0] => (item=/var/lib/ceph/radosgw) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/radosgw", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/radosgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rgw) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-rgw", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-rgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [compute-0] => (item=/var/lib/ceph/bootstrap-mds) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-mds", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [compute-0] => (item=/var/lib/ceph/bootstrap-osd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-osd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rbd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-rbd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-rbd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [compute-0] => (item=/var/run/ceph) => {"changed": true, "gid": 167, "group": "167", "item": "/var/run/ceph", "mode": "0755", "owner": "167", "path": "/var/run/ceph", "secontext": "unconfined_u:object_r:var_run_t:s0", "size": 40, "state": "directory", "uid": 167}

TASK [ceph-docker-common : fail if systemd is not present] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2
Monday 25 June 2018 06:08:33 -0400 (0:00:05.487) 0:03:55.062 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2
Monday 25 June 2018 06:08:33 -0400 (0:00:00.046) 0:03:55.108 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11
Monday 25 June 2018 06:08:33 -0400 (0:00:00.044) 0:03:55.153 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : remove ceph udev rules] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2
Monday 25 June 2018 06:08:33 -0400 (0:00:00.043) 0:03:55.197 *********** 
ok: [compute-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "path": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "state": "absent"}
ok: [compute-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "path": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "state": "absent"}

TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14
Monday 25 June 2018 06:08:34 -0400 (0:00:00.997) 0:03:56.194 *********** 
ok: [compute-0] => {"ansible_facts": {"monitor_name": "compute-0"}, "changed": false}

TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20
Monday 25 June 2018 06:08:34 -0400 (0:00:00.082) 0:03:56.277 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : get docker version] *********************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26
Monday 25 June 2018 06:08:34 -0400 (0:00:00.044) 0:03:56.322 *********** 
ok: [compute-0] => {"changed": false, "cmd": ["docker", "--version"], "delta": "0:00:00.026815", "end": "2018-06-25 10:08:35.578523", "rc": 0, "start": "2018-06-25 10:08:35.551708", "stderr": "", "stderr_lines": [], "stdout": "Docker version 1.13.1, build 94f4240/1.13.1", "stdout_lines": ["Docker version 1.13.1, build 94f4240/1.13.1"]}

TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32
Monday 25 June 2018 06:08:35 -0400 (0:00:00.543) 0:03:56.866 *********** 
ok: [compute-0] => {"ansible_facts": {"ceph_docker_version": "1.13.1,"}, "changed": false}

TASK [ceph-docker-common : check if a cluster is already running] **************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42
Monday 25 June 2018 06:08:35 -0400 (0:00:00.069) 0:03:56.936 *********** 
ok: [compute-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mon-compute-0"], "delta": "0:00:00.027011", "end": "2018-06-25 10:08:36.182188", "failed_when_result": false, "rc": 0, "start": "2018-06-25 10:08:36.155177", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-docker-common : set_fact ceph_config_keys] **************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2
Monday 25 June 2018 06:08:36 -0400 (0:00:00.539) 0:03:57.475 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13
Monday 25 June 2018 06:08:36 -0400 (0:00:00.050) 0:03:57.525 *********** 
skipping: [compute-0] => (item=controller-0) => {"changed": false, "item": "controller-0", "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20
Monday 25 June 2018 06:08:36 -0400 (0:00:00.059) 0:03:57.585 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25
Monday 25 June 2018 06:08:36 -0400 (0:00:00.047) 0:03:57.633 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : stat for ceph config and keys] **********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30
Monday 25 June 2018 06:08:36 -0400 (0:00:00.052) 0:03:57.685 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : fail if we find existing cluster files] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5
Monday 25 June 2018 06:08:36 -0400 (0:00:00.049) 0:03:57.734 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : check ntp installation on atomic] *******************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2
Monday 25 June 2018 06:08:36 -0400 (0:00:00.053) 0:03:57.787 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : start the ntp service] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6
Monday 25 June 2018 06:08:36 -0400 (0:00:00.043) 0:03:57.830 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2
Monday 25 June 2018 06:08:36 -0400 (0:00:00.052) 0:03:57.883 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : install ntp on redhat or suse] **********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13
Monday 25 June 2018 06:08:36 -0400 (0:00:00.048) 0:03:57.932 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : start the ntp service] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7
Monday 25 June 2018 06:08:36 -0400 (0:00:00.051) 0:03:57.983 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : check ntp installation on debian] *******************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2
Monday 25 June 2018 06:08:36 -0400 (0:00:00.049) 0:03:58.033 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : install ntp on debian] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11
Monday 25 June 2018 06:08:36 -0400 (0:00:00.046) 0:03:58.080 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : start the ntp service] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7
Monday 25 June 2018 06:08:36 -0400 (0:00:00.046) 0:03:58.126 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspect ceph mon container] *************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3
Monday 25 June 2018 06:08:36 -0400 (0:00:00.053) 0:03:58.180 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspect ceph osd container] *************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12
Monday 25 June 2018 06:08:36 -0400 (0:00:00.048) 0:03:58.228 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspect ceph mds container] *************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21
Monday 25 June 2018 06:08:36 -0400 (0:00:00.046) 0:03:58.275 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspect ceph rgw container] *************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30
Monday 25 June 2018 06:08:36 -0400 (0:00:00.044) 0:03:58.320 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspect ceph mgr container] *************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39
Monday 25 June 2018 06:08:36 -0400 (0:00:00.046) 0:03:58.367 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48
Monday 25 June 2018 06:08:37 -0400 (0:00:00.046) 0:03:58.413 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspect ceph nfs container] *************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57
Monday 25 June 2018 06:08:37 -0400 (0:00:00.053) 0:03:58.466 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67
Monday 25 June 2018 06:08:37 -0400 (0:00:00.045) 0:03:58.512 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76
Monday 25 June 2018 06:08:37 -0400 (0:00:00.044) 0:03:58.557 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85
Monday 25 June 2018 06:08:37 -0400 (0:00:00.042) 0:03:58.600 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94
Monday 25 June 2018 06:08:37 -0400 (0:00:00.043) 0:03:58.644 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103
Monday 25 June 2018 06:08:37 -0400 (0:00:00.043) 0:03:58.687 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112
Monday 25 June 2018 06:08:37 -0400 (0:00:00.054) 0:03:58.741 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121
Monday 25 June 2018 06:08:37 -0400 (0:00:00.048) 0:03:58.790 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130
Monday 25 June 2018 06:08:37 -0400 (0:00:00.045) 0:03:58.835 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137
Monday 25 June 2018 06:08:37 -0400 (0:00:00.045) 0:03:58.881 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144
Monday 25 June 2018 06:08:37 -0400 (0:00:00.045) 0:03:58.926 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151
Monday 25 June 2018 06:08:37 -0400 (0:00:00.046) 0:03:58.973 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158
Monday 25 June 2018 06:08:37 -0400 (0:00:00.051) 0:03:59.024 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165
Monday 25 June 2018 06:08:37 -0400 (0:00:00.042) 0:03:59.067 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172
Monday 25 June 2018 06:08:37 -0400 (0:00:00.043) 0:03:59.110 *********** 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179
Monday 25 June 2018 06:08:37 -0400 (0:00:00.045) 0:03:59.156 *********** 
ok: [compute-0] => {"attempts": 1, "changed": false, "cmd": ["timeout", "300s", "docker", "pull", "192.168.24.1:8787/rhceph:3-6"], "delta": "0:00:16.612840", "end": "2018-06-25 10:08:55.118463", "rc": 0, "start": "2018-06-25 10:08:38.505623", "stderr": "", "stderr_lines": [], "stdout": "Trying to pull repository 192.168.24.1:8787/rhceph ... \n3-6: Pulling from 192.168.24.1:8787/rhceph\n9a32f102e677: Pulling fs layer\nb8aa42cec17a: Pulling fs layer\nf00cbf28d025: Pulling fs layer\nb8aa42cec17a: Verifying Checksum\nb8aa42cec17a: Download complete\n9a32f102e677: Verifying Checksum\n9a32f102e677: Download complete\nf00cbf28d025: Verifying Checksum\nf00cbf28d025: Download complete\n9a32f102e677: Pull complete\nb8aa42cec17a: Pull complete\nf00cbf28d025: Pull complete\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6", "stdout_lines": ["Trying to pull repository 192.168.24.1:8787/rhceph ... ", "3-6: Pulling from 192.168.24.1:8787/rhceph", "9a32f102e677: Pulling fs layer", "b8aa42cec17a: Pulling fs layer", "f00cbf28d025: Pulling fs layer", "b8aa42cec17a: Verifying Checksum", "b8aa42cec17a: Download complete", "9a32f102e677: Verifying Checksum", "9a32f102e677: Download complete", "f00cbf28d025: Verifying Checksum", "f00cbf28d025: Download complete", "9a32f102e677: Pull complete", "b8aa42cec17a: Pull complete", "f00cbf28d025: Pull complete", "Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a", "Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6"]}

TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189
Monday 25 June 2018 06:08:55 -0400 (0:00:17.263) 0:04:16.419 *********** 
changed: [compute-0] => {"changed": true, "cmd": ["docker", "inspect", "192.168.24.1:8787/rhceph:3-6"], "delta": "0:00:00.032251", "end": "2018-06-25 10:08:55.698144", "failed_when_result": false, "rc": 0, "start": "2018-06-25 10:08:55.665893", "stderr": "", "stderr_lines": [], "stdout": "[\n {\n \"Id\": \"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\",\n \"RepoTags\": [\n \"192.168.24.1:8787/rhceph:3-6\"\n ],\n \"RepoDigests\": [\n \"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"\n ],\n \"Parent\": \"\",\n \"Comment\": \"\",\n \"Created\": \"2018-04-18T13:13:30.317845Z\",\n \"Container\": \"\",\n \"ContainerConfig\": {\n \"Hostname\": \"9817222a9fd1\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"AttachStdin\": false,\n \"AttachStdout\": false,\n \"AttachStderr\": false,\n \"ExposedPorts\": {\n \"5000/tcp\": {},\n \"6789/tcp\": {},\n \"6800/tcp\": {},\n \"6801/tcp\": {},\n \"6802/tcp\": {},\n \"6803/tcp\": {},\n \"6804/tcp\": {},\n \"6805/tcp\": {},\n \"80/tcp\": {}\n },\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"container=oci\",\n \"CEPH_VERSION=luminous\",\n \"CEPH_POINT_RELEASE=\"\n ],\n \"Cmd\": [\n \"/bin/sh\",\n \"-c\",\n \"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\"\n ],\n \"ArgsEscaped\": true,\n \"Image\": \"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\",\n \"Volumes\": {\n \"/etc/ceph\": {},\n \"/etc/ganesha\": {},\n \"/var/lib/ceph\": {}\n },\n \"WorkingDir\": \"/\",\n \"Entrypoint\": [\n \"/entrypoint.sh\"\n ],\n \"OnBuild\": [],\n \"Labels\": {\n \"CEPH_POINT_RELEASE\": \"\",\n \"GIT_BRANCH\": \"master\",\n \"GIT_CLEAN\": \"True\",\n \"GIT_COMMIT\": \"99f689cd2c12f8332924db6a0cc0463bb26631b0\",\n \"GIT_REPO\": \"git@github.com:ceph/ceph-container.git\",\n \"RELEASE\": \"master\",\n \"architecture\": \"x86_64\",\n \"authoritative-source-url\": \"registry.access.redhat.com\",\n \"build-date\": \"2018-04-18T13:01:58.678631\",\n \"com.redhat.build-host\": \"ip-10-29-120-145.ec2.internal\",\n \"com.redhat.component\": \"rhceph-rhel7-docker\",\n \"description\": \"Red Hat Ceph Storage 3\",\n \"distribution-scope\": \"public\",\n \"install\": \"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\",\n \"io.k8s.description\": \"Red Hat Ceph Storage 
3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": 
\\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, 
Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/05438dd2dbf4147ffea1c9670683cc73289a1beb5e335366ceae49581e9db966/diff:/var/lib/docker/overlay2/27250b41116c2d1bd1da05a6caaf5ee1d1219a89b4bc38979fd90c93eff8c02e/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/4c742a6925035635f27634794817fadc957e2b1327b3bac549537147a3e8a285/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/4c742a6925035635f27634794817fadc957e2b1327b3bac549537147a3e8a285/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/4c742a6925035635f27634794817fadc957e2b1327b3bac549537147a3e8a285/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" 
\\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e 
CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" 
\\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e 
CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/05438dd2dbf4147ffea1c9670683cc73289a1beb5e335366ceae49581e9db966/diff:/var/lib/docker/overlay2/27250b41116c2d1bd1da05a6caaf5ee1d1219a89b4bc38979fd90c93eff8c02e/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/4c742a6925035635f27634794817fadc957e2b1327b3bac549537147a3e8a285/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/4c742a6925035635f27634794817fadc957e2b1327b3bac549537147a3e8a285/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/4c742a6925035635f27634794817fadc957e2b1327b3bac549537147a3e8a285/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nMonday 25 June 2018 06:08:55 -0400 (0:00:00.586) 0:04:17.006 *********** \nok: [compute-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nMonday 25 June 2018 06:08:55 -0400 (0:00:00.083) 0:04:17.089 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nMonday 25 June 2018 06:08:55 -0400 (0:00:00.047) 0:04:17.136 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nMonday 25 June 2018 06:08:55 -0400 (0:00:00.046) 0:04:17.183 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nMonday 25 June 2018 06:08:55 -0400 (0:00:00.047) 0:04:17.230 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nMonday 25 June 2018 06:08:55 -0400 (0:00:00.054) 0:04:17.285 *********** \nskipping: [compute-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nMonday 25 June 2018 06:08:55 -0400 (0:00:00.047) 0:04:17.332 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nMonday 25 June 2018 06:08:55 -0400 (0:00:00.048) 0:04:17.381 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nMonday 25 June 2018 06:08:56 -0400 (0:00:00.053) 0:04:17.435 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nMonday 25 June 2018 06:08:56 -0400 (0:00:00.050) 0:04:17.486 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nMonday 25 June 2018 06:08:56 -0400 (0:00:00.047) 0:04:17.533 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nMonday 25 June 2018 06:08:56 -0400 (0:00:00.053) 
0:04:17.586 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nMonday 25 June 2018 06:08:56 -0400 (0:00:00.047) 0:04:17.634 *********** \nok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.573659\", \"end\": \"2018-06-25 10:08:57.422010\", \"rc\": 0, \"start\": \"2018-06-25 10:08:56.848351\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nMonday 25 June 2018 06:08:57 -0400 (0:00:01.080) 0:04:18.715 *********** \nok: [compute-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nMonday 25 June 2018 06:08:57 -0400 (0:00:00.079) 0:04:18.794 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nMonday 25 June 2018 06:08:57 -0400 (0:00:00.047) 0:04:18.842 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] 
*********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nMonday 25 June 2018 06:08:57 -0400 (0:00:00.043) 0:04:18.885 *********** \nok: [compute-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nMonday 25 June 2018 06:08:57 -0400 (0:00:00.073) 0:04:18.959 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release nautilus] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26\nMonday 25 June 2018 06:08:57 -0400 (0:00:00.051) 0:04:19.010 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nMonday 25 June 2018 06:08:57 -0400 (0:00:00.048) 0:04:19.058 *********** \nchanged: [compute-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": 
\"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : create ceph conf directory] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nMonday 25 June 2018 06:09:00 -0400 (0:00:02.538) 0:04:21.597 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate ceph configuration file: ceph.conf] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12\nMonday 25 June 2018 06:09:00 -0400 (0:00:00.043) 0:04:21.640 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : create a local fetch directory if it does not exist] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38\nMonday 25 June 2018 06:09:00 -0400 (0:00:00.043) 0:04:21.683 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate cluster uuid] 
*************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54\nMonday 25 June 2018 06:09:00 -0400 (0:00:00.051) 0:04:21.735 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : read cluster uuid if it already exists] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64\nMonday 25 June 2018 06:09:00 -0400 (0:00:00.043) 0:04:21.779 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : ensure /etc/ceph exists] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76\nMonday 25 June 2018 06:09:00 -0400 (0:00:00.042) 0:04:21.821 *********** \nchanged: [compute-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-config : generate ceph.conf configuration file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84\nMonday 25 June 2018 06:09:00 -0400 (0:00:00.533) 0:04:22.354 *********** \nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy mon restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy osd restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for 
compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy mds restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy rgw restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy mgr restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for compute-0\nchanged: [compute-0] => {\"changed\": true, \"checksum\": 
\"743848637000cc874025cc6ea8e3f15a09c4d9b7\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"70f96443c5883f06f4a1fd0921fced2c\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 978, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921341.01-55421071371481/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-config : set fsid fact when generate_fsid = true] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102\nMonday 25 June 2018 06:09:04 -0400 (0:00:03.202) 0:04:25.557 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-client : copy ceph admin keyring when non containerized deployment] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml:2\nMonday 25 June 2018 06:09:04 -0400 (0:00:00.044) 0:04:25.602 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-client : set_fact keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:2\nMonday 25 June 2018 06:09:04 -0400 (0:00:00.040) 0:04:25.642 *********** \nok: [compute-0] => (item={u'mon_cap': u'allow r', u'name': u'client.openstack', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==', u'osd_cap': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'}) => {\"ansible_facts\": {\"keys_tmp\": [{\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, 
\"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}]}, \"changed\": false, \"item\": {\"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow r\", \"name\": \"client.openstack\", \"osd_cap\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}}\nok: [compute-0] => (item={u'mon_cap': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"', u'mds_cap': u'allow *', u'name': u'client.manila', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==', u'osd_cap': u'allow rw'}) => {\"ansible_facts\": {\"keys_tmp\": [{\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}]}, \"changed\": false, \"item\": {\"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mds_cap\": \"allow *\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", 
\"name\": \"client.manila\", \"osd_cap\": \"allow rw\"}}\nok: [compute-0] => (item={u'mon_cap': u'allow rw', u'name': u'client.radosgw', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==', u'osd_cap': u'allow rwx'}) => {\"ansible_facts\": {\"keys_tmp\": [{\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}]}, \"changed\": false, \"item\": {\"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow rw\", \"name\": \"client.radosgw\", \"osd_cap\": \"allow rwx\"}}\n\nTASK [ceph-client : set_fact keys - override keys_tmp with keys] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:9\nMonday 25 June 2018 06:09:04 -0400 (0:00:00.113) 0:04:25.756 *********** \nok: [compute-0] => {\"ansible_facts\": {\"keys\": [{\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, 
allow rwx pool=metrics'\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}]}, \"changed\": false}\n\nTASK [ceph-client : run a dummy container (sleep 300) from where we can create pool(s)/key(s)] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:15\nMonday 25 June 2018 06:09:04 -0400 (0:00:00.074) 0:04:25.830 *********** \nok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"-d\", \"-v\", \"/etc/ceph:/etc/ceph:z\", \"--name\", \"ceph-create-keys\", \"--entrypoint=sleep\", \"192.168.24.1:8787/rhceph:3-6\", \"300\"], \"delta\": \"0:00:00.319928\", \"end\": \"2018-06-25 10:09:05.411868\", \"rc\": 0, \"start\": \"2018-06-25 10:09:05.091940\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"9de5dd85f987537801e8d6bf9e2202567ee5544ec635e840fe241fe5d8247841\", \"stdout_lines\": [\"9de5dd85f987537801e8d6bf9e2202567ee5544ec635e840fe241fe5d8247841\"]}\n\nTASK [ceph-client : set_fact delegated_node] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:30\nMonday 25 June 2018 06:09:05 -0400 (0:00:00.878) 0:04:26.709 *********** \nok: [compute-0] => {\"ansible_facts\": {\"delegated_node\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-client : set_fact condition_copy_admin_key] 
*************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:34\nMonday 25 June 2018 06:09:05 -0400 (0:00:00.075) 0:04:26.785 *********** \nok: [compute-0] => {\"ansible_facts\": {\"condition_copy_admin_key\": true}, \"changed\": false}\n\nTASK [ceph-client : set_fact docker_exec_cmd] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:38\nMonday 25 June 2018 06:09:05 -0400 (0:00:00.074) 0:04:26.859 *********** \nok: [compute-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0 \"}, \"changed\": false}\n\nTASK [ceph-client : create cephx key(s)] ***************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:44\nMonday 25 June 2018 06:09:05 -0400 (0:00:00.137) 0:04:26.996 *********** \nchanged: [compute-0 -> 192.168.24.14] => (item={'caps': {'mds': u\"''\", 'osd': u\"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\", 'mon': u\"'allow r'\", 'mgr': u\"'allow *'\"}, 'mode': u'0600', 'key': u'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==', 'name': u'client.openstack'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph-authtool\", \"--create-keyring\", \"/etc/ceph/ceph.client.openstack.keyring\", \"--name\", \"client.openstack\", \"--add-key\", \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"--cap\", \"mds\", \"''\", \"--cap\", \"osd\", \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\", \"--cap\", \"mgr\", \"'allow *'\", \"--cap\", \"mon\", \"'allow r'\"], \"delta\": \"0:00:00.152629\", \"end\": \"2018-06-25 10:09:06.393746\", \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", 
\"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"rc\": 0, \"start\": \"2018-06-25 10:09:06.241117\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"creating /etc/ceph/ceph.client.openstack.keyring\\nadded entity client.openstack auth auth(auid = 18446744073709551615 key=AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw== with 0 caps)\", \"stdout_lines\": [\"creating /etc/ceph/ceph.client.openstack.keyring\", \"added entity client.openstack auth auth(auid = 18446744073709551615 key=AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw== with 0 caps)\"]}\nchanged: [compute-0 -> 192.168.24.14] => (item={'caps': {'mds': u\"'allow *'\", 'osd': u\"'allow rw'\", 'mon': u'\\'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"\\'', 'mgr': u\"'allow *'\"}, 'name': u'client.manila', 'key': u'AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==', 'mode': u'0600'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph-authtool\", \"--create-keyring\", \"/etc/ceph/ceph.client.manila.keyring\", \"--name\", \"client.manila\", \"--add-key\", \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"--cap\", \"mds\", \"'allow *'\", \"--cap\", \"osd\", \"'allow rw'\", \"--cap\", \"mgr\", \"'allow *'\", \"--cap\", \"mon\", \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\"], \"delta\": \"0:00:00.153805\", \"end\": \"2018-06-25 10:09:07.009700\", \"item\": {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", 
allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"rc\": 0, \"start\": \"2018-06-25 10:09:06.855895\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"creating /etc/ceph/ceph.client.manila.keyring\\nadded entity client.manila auth auth(auid = 18446744073709551615 key=AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ== with 0 caps)\", \"stdout_lines\": [\"creating /etc/ceph/ceph.client.manila.keyring\", \"added entity client.manila auth auth(auid = 18446744073709551615 key=AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ== with 0 caps)\"]}\nchanged: [compute-0 -> 192.168.24.14] => (item={'caps': {'mds': u\"''\", 'osd': u\"'allow rwx'\", 'mon': u\"'allow rw'\", 'mgr': u\"'allow *'\"}, 'mode': u'0600', 'key': u'AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==', 'name': u'client.radosgw'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph-authtool\", \"--create-keyring\", \"/etc/ceph/ceph.client.radosgw.keyring\", \"--name\", \"client.radosgw\", \"--add-key\", \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"--cap\", \"mds\", \"''\", \"--cap\", \"osd\", \"'allow rwx'\", \"--cap\", \"mgr\", \"'allow *'\", \"--cap\", \"mon\", \"'allow rw'\"], \"delta\": \"0:00:00.150642\", \"end\": \"2018-06-25 10:09:07.621496\", \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"rc\": 0, \"start\": \"2018-06-25 10:09:07.470854\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"creating /etc/ceph/ceph.client.radosgw.keyring\\nadded entity client.radosgw auth auth(auid = 18446744073709551615 key=AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg== with 0 caps)\", \"stdout_lines\": [\"creating /etc/ceph/ceph.client.radosgw.keyring\", 
\"added entity client.radosgw auth auth(auid = 18446744073709551615 key=AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg== with 0 caps)\"]}\n\nTASK [ceph-client : slurp client cephx key(s)] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:62\nMonday 25 June 2018 06:09:07 -0400 (0:00:01.948) 0:04:28.945 *********** \nok: [compute-0 -> 192.168.24.14] => (item={'caps': {'mds': u\"''\", 'osd': u\"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\", 'mon': u\"'allow r'\", 'mgr': u\"'allow *'\"}, 'mode': u'0600', 'key': u'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==', 'name': u'client.openstack'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUNsSlMxYkFBQUFBQkFBZHpNQW44R2pObmtwMEdoNWJTOElNdz09CgljYXBzIG1kcyA9ICInJyIKCWNhcHMgbWdyID0gIidhbGxvdyAqJyIKCWNhcHMgbW9uID0gIidhbGxvdyByJyIKCWNhcHMgb3NkID0gIidhbGxvdyBjbGFzcy1yZWFkIG9iamVjdF9wcmVmaXggcmJkX2NoaWxkcmVuLCBhbGxvdyByd3ggcG9vbD12b2x1bWVzLCBhbGxvdyByd3ggcG9vbD1iYWNrdXBzLCBhbGxvdyByd3ggcG9vbD12bXMsIGFsbG93IHJ3eCBwb29sPWltYWdlcywgYWxsb3cgcnd4IHBvb2w9bWV0cmljcyciCg==\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"source\": \"/etc/ceph/ceph.client.openstack.keyring\"}\nok: [compute-0 -> 192.168.24.14] => (item={'caps': {'mds': u\"'allow *'\", 'osd': u\"'allow rw'\", 'mon': u'\\'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"\\'', 'mgr': u\"'allow *'\"}, 'name': 
u'client.manila', 'key': u'AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==', 'mode': u'0600'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUNsSlMxYkFBQUFBQkFBSDJvM2wxL0JLU0VHVVRVR3B0OEZIUT09CgljYXBzIG1kcyA9ICInYWxsb3cgKiciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgciwgYWxsb3cgY29tbWFuZCBcImF1dGggZGVsXCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGNhcHNcIiwgYWxsb3cgY29tbWFuZCBcImF1dGggZ2V0XCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGdldC1vci1jcmVhdGVcIiciCgljYXBzIG9zZCA9ICInYWxsb3cgcncnIgo=\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"source\": \"/etc/ceph/ceph.client.manila.keyring\"}\nok: [compute-0 -> 192.168.24.14] => (item={'caps': {'mds': u\"''\", 'osd': u\"'allow rwx'\", 'mon': u\"'allow rw'\", 'mgr': u\"'allow *'\"}, 'mode': u'0600', 'key': u'AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==', 'name': u'client.radosgw'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFDbEpTMWJBQUFBQUJBQVJCUEJLZ1pseGh4SXJ6RlM5RnVlUmc9PQoJY2FwcyBtZHMgPSAiJyciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgcncnIgoJY2FwcyBvc2QgPSAiJ2FsbG93IHJ3eCciCg==\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"source\": \"/etc/ceph/ceph.client.radosgw.keyring\"}\n\nTASK [ceph-client : list existing pool(s)] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:74\nMonday 25 June 2018 06:09:08 -0400 
(0:00:01.398) 0:04:30.343 *********** \n\nTASK [ceph-client : create ceph pool(s)] ***************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:86\nMonday 25 June 2018 06:09:08 -0400 (0:00:00.043) 0:04:30.386 *********** \n\nTASK [ceph-client : kill a dummy container that created pool(s)/key(s)] ********\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:109\nMonday 25 June 2018 06:09:09 -0400 (0:00:00.049) 0:04:30.436 *********** \nok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"rm\", \"-f\", \"ceph-create-keys\"], \"delta\": \"0:00:00.149063\", \"end\": \"2018-06-25 10:09:09.842587\", \"rc\": 0, \"start\": \"2018-06-25 10:09:09.693524\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph-create-keys\", \"stdout_lines\": [\"ceph-create-keys\"]}\n\nTASK [ceph-client : get client cephx keys] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:116\nMonday 25 June 2018 06:09:09 -0400 (0:00:00.699) 0:04:31.135 *********** \nchanged: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUNsSlMxYkFBQUFBQkFBZHpNQW44R2pObmtwMEdoNWJTOElNdz09CgljYXBzIG1kcyA9ICInJyIKCWNhcHMgbWdyID0gIidhbGxvdyAqJyIKCWNhcHMgbW9uID0gIidhbGxvdyByJyIKCWNhcHMgb3NkID0gIidhbGxvdyBjbGFzcy1yZWFkIG9iamVjdF9wcmVmaXggcmJkX2NoaWxkcmVuLCBhbGxvdyByd3ggcG9vbD12b2x1bWVzLCBhbGxvdyByd3ggcG9vbD1iYWNrdXBzLCBhbGxvdyByd3ggcG9vbD12bXMsIGFsbG93IHJ3eCBwb29sPWltYWdlcywgYWxsb3cgcnd4IHBvb2w9bWV0cmljcyciCg==', 'failed': False, u'source': u'/etc/ceph/ceph.client.openstack.keyring', 'item': {'mode': u'0600', 'name': u'client.openstack', 'key': u'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==', 'caps': {'mds': u\"''\", 'osd': u\"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx 
pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\", 'mgr': u\"'allow *'\", 'mon': u\"'allow r'\"}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.openstack.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.14'}, '_ansible_ignore_errors': None}) => {\"changed\": true, \"checksum\": \"e8c02c06312fda4c2590d332d52c324f8cc7ee59\", \"dest\": \"/etc/ceph/ceph.client.openstack.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUNsSlMxYkFBQUFBQkFBZHpNQW44R2pObmtwMEdoNWJTOElNdz09CgljYXBzIG1kcyA9ICInJyIKCWNhcHMgbWdyID0gIidhbGxvdyAqJyIKCWNhcHMgbW9uID0gIidhbGxvdyByJyIKCWNhcHMgb3NkID0gIidhbGxvdyBjbGFzcy1yZWFkIG9iamVjdF9wcmVmaXggcmJkX2NoaWxkcmVuLCBhbGxvdyByd3ggcG9vbD12b2x1bWVzLCBhbGxvdyByd3ggcG9vbD1iYWNrdXBzLCBhbGxvdyByd3ggcG9vbD12bXMsIGFsbG93IHJ3eCBwb29sPWltYWdlcywgYWxsb3cgcnd4IHBvb2w9bWV0cmljcyciCg==\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.openstack.keyring\"}}, \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"source\": \"/etc/ceph/ceph.client.openstack.keyring\"}, \"md5sum\": \"3320618afc06e58268928352e4f18a11\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 307, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921349.83-43059450388249/source\", \"state\": \"file\", \"uid\": 167}\nchanged: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, 
u'content': u'W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUNsSlMxYkFBQUFBQkFBSDJvM2wxL0JLU0VHVVRVR3B0OEZIUT09CgljYXBzIG1kcyA9ICInYWxsb3cgKiciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgciwgYWxsb3cgY29tbWFuZCBcImF1dGggZGVsXCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGNhcHNcIiwgYWxsb3cgY29tbWFuZCBcImF1dGggZ2V0XCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGdldC1vci1jcmVhdGVcIiciCgljYXBzIG9zZCA9ICInYWxsb3cgcncnIgo=', 'failed': False, u'source': u'/etc/ceph/ceph.client.manila.keyring', 'item': {'name': u'client.manila', 'mode': u'0600', 'key': u'AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==', 'caps': {'mds': u\"'allow *'\", 'osd': u\"'allow rw'\", 'mgr': u\"'allow *'\", 'mon': u'\\'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"\\''}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.manila.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.14'}, '_ansible_ignore_errors': None}) => {\"changed\": true, \"checksum\": \"21419c0962bd0ff32415ed84be15002db21af2d5\", \"dest\": \"/etc/ceph/ceph.client.manila.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUNsSlMxYkFBQUFBQkFBSDJvM2wxL0JLU0VHVVRVR3B0OEZIUT09CgljYXBzIG1kcyA9ICInYWxsb3cgKiciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgciwgYWxsb3cgY29tbWFuZCBcImF1dGggZGVsXCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGNhcHNcIiwgYWxsb3cgY29tbWFuZCBcImF1dGggZ2V0XCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGdldC1vci1jcmVhdGVcIiciCgljYXBzIG9zZCA9ICInYWxsb3cgcncnIgo=\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.manila.keyring\"}}, \"item\": {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", 
allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"source\": \"/etc/ceph/ceph.client.manila.keyring\"}, \"md5sum\": \"101079d72332a3a117d5dae184f86fd7\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 284, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921352.44-130021033914542/source\", \"state\": \"file\", \"uid\": 167}\nchanged: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFDbEpTMWJBQUFBQUJBQVJCUEJLZ1pseGh4SXJ6RlM5RnVlUmc9PQoJY2FwcyBtZHMgPSAiJyciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgcncnIgoJY2FwcyBvc2QgPSAiJ2FsbG93IHJ3eCciCg==', 'failed': False, u'source': u'/etc/ceph/ceph.client.radosgw.keyring', 'item': {'mode': u'0600', 'name': u'client.radosgw', 'key': u'AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==', 'caps': {'mds': u\"''\", 'osd': u\"'allow rwx'\", 'mgr': u\"'allow *'\", 'mon': u\"'allow rw'\"}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.radosgw.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.14'}, '_ansible_ignore_errors': None}) => {\"changed\": true, \"checksum\": \"f731a8adae069c294e3019338ac5e8a6703c7065\", \"dest\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFDbEpTMWJBQUFBQUJBQVJCUEJLZ1pseGh4SXJ6RlM5RnVlUmc9PQoJY2FwcyBtZHMgPSAiJyciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgcncnIgoJY2FwcyBvc2QgPSAiJ2FsbG93IHJ3eCciCg==\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.radosgw.keyring\"}}, \"item\": {\"caps\": 
{\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"source\": \"/etc/ceph/ceph.client.radosgw.keyring\"}, \"md5sum\": \"494a95924e2c9bb292694df2b83f8e2b\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 157, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921355.07-31406371722383/source\", \"state\": \"file\", \"uid\": 167}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******\nMonday 25 June 2018 06:09:17 -0400 (0:00:07.855) 0:04:38.991 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mon restart script] **********************\nMonday 25 June 2018 06:09:17 -0400 (0:00:00.065) 0:04:39.056 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***\nMonday 25 June 2018 06:09:17 -0400 (0:00:00.041) 0:04:39.098 *********** \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******\nMonday 25 June 2018 06:09:17 -0400 (0:00:00.075) 0:04:39.174 *********** \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********\nMonday 25 June 2018 06:09:17 -0400 (0:00:00.074) 0:04:39.249 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] 
*******\nMonday 25 June 2018 06:09:17 -0400 (0:00:00.068) 0:04:39.317 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy osd restart script] **********************\nMonday 25 June 2018 06:09:17 -0400 (0:00:00.066) 0:04:39.383 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.045) 0:04:39.428 *********** \nskipping: [compute-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.071) 0:04:39.500 *********** \nskipping: [compute-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.074) 0:04:39.575 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.064) 0:04:39.639 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mds restart script] **********************\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.065) 0:04:39.705 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.040) 0:04:39.745 *********** \nskipping: [compute-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.052) 0:04:39.797 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.052) 0:04:39.850 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.062) 0:04:39.912 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.062) 0:04:39.975 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.040) 0:04:40.015 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.047) 0:04:40.063 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.047) 0:04:40.110 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.063) 0:04:40.174 *********** 
\nok: [compute-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.064) 0:04:40.238 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.041) 0:04:40.279 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.049) 0:04:40.328 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***\nMonday 25 June 2018 06:09:18 -0400 (0:00:00.047) 0:04:40.376 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******\nMonday 25 June 2018 06:09:19 -0400 (0:00:00.062) 0:04:40.438 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************\nMonday 25 June 2018 06:09:19 -0400 (0:00:00.065) 0:04:40.503 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***\nMonday 25 June 2018 06:09:19 -0400 (0:00:00.038) 0:04:40.542 *********** \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : 
restart ceph mgr daemon(s) - container] *******\nMonday 25 June 2018 06:09:19 -0400 (0:00:00.079) 0:04:40.622 *********** \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********\nMonday 25 June 2018 06:09:19 -0400 (0:00:00.076) 0:04:40.698 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [set ceph client install 'Complete'] **************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:324\nMonday 25 June 2018 06:09:19 -0400 (0:00:00.087) 0:04:40.786 *********** \nok: [compute-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_client\": {\"end\": \"20180625060919Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nPLAY RECAP *********************************************************************\nceph-0 : ok=88 changed=18 unreachable=0 failed=0 \ncompute-0 : ok=57 changed=7 unreachable=0 failed=0 \ncontroller-0 : ok=119 changed=20 unreachable=0 failed=0 \n\n\nINSTALLER STATUS ***************************************************************\nInstall Ceph Monitor : Complete (0:01:09)\nInstall Ceph Manager : Complete (0:00:40)\nInstall Ceph OSD : Complete (0:01:45)\nInstall Ceph Client : Complete (0:00:56)\n\nMonday 25 June 2018 06:09:19 -0400 (0:00:00.056) 0:04:40.842 *********** \n=============================================================================== \nceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image -------- 17.88s\n/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179 ----\nceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image -------- 17.38s\n/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179 ----\nceph-docker-common : 
pulling 192.168.24.1:8787/rhceph:3-6 image -------- 17.26s\n/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179 ----\ngather and delegate facts ----------------------------------------------- 9.29s\n/usr/share/ceph-ansible/site-docker.yml.sample:29 -----------------------------\nceph-client : get client cephx keys ------------------------------------- 7.86s\n/usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:116 -----\nceph-osd : create openstack pool(s) ------------------------------------- 7.55s\n/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:21 ----------\nceph-osd : prepare ceph containerized osd disk collocated --------------- 7.48s\n/usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:5 -------\nceph-osd : assign application to pool(s) -------------------------------- 5.99s\n/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:41 ----------\nceph-osd : copy to other mons the openstack cephx key(s) ---------------- 5.91s\n/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:71 ----------\nceph-defaults : create ceph initial directories ------------------------- 5.76s\n/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18 \nceph-defaults : create ceph initial directories ------------------------- 5.68s\n/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18 \nceph-defaults : create ceph initial directories ------------------------- 5.49s\n/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18 \nceph-defaults : create ceph initial directories ------------------------- 4.99s\n/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18 \nceph-osd : list existing pool(s) ---------------------------------------- 4.34s\n/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:12 ----------\nceph-osd : create openstack cephx key(s) 
-------------------------------- 4.34s\n/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:50 ----------\nceph-config : generate ceph.conf configuration file --------------------- 3.41s\n/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84 -------------------\nceph-config : generate ceph.conf configuration file --------------------- 3.20s\n/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84 -------------------\nceph-mon : push ceph files to the ansible server ------------------------ 3.03s\n/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2 -------\nceph-config : generate ceph.conf configuration file --------------------- 3.03s\n/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84 -------------------\nceph-mgr : generate systemd unit file ----------------------------------- 2.98s\n/usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:2 ----", "stdout_lines": ["ansible-playbook 2.5.4", " config file = /usr/share/ceph-ansible/ansible.cfg", " configured module search path = [u'/usr/share/ceph-ansible/library']", " ansible python module location = /usr/lib/python2.7/site-packages/ansible", " executable location = /usr/bin/ansible-playbook", " python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]", "Using /usr/share/ceph-ansible/ansible.cfg as config file", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml", 
"statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/non_containerized.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/containerized.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rgw/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/start_rbd_mirror.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml", 
"statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/start_docker_rbd_mirror.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/start_restapi.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/copy_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/start_docker_restapi.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml", "", "PLAYBOOK: site-docker.yml.sample ***********************************************", "12 plays in /usr/share/ceph-ansible/site-docker.yml.sample", "", "PLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,iscsigws,mgrs] ***", "", "TASK [gather facts] ************************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:24", "Monday 25 June 2018 06:04:38 -0400 (0:00:00.185) 0:00:00.185 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [gather and delegate facts] 
***********************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:29", "Monday 25 June 2018 06:04:38 -0400 (0:00:00.079) 0:00:00.265 *********** ", "ok: [controller-0 -> 192.168.24.13] => (item=compute-0)", "ok: [controller-0 -> 192.168.24.14] => (item=controller-0)", "ok: [controller-0 -> 192.168.24.16] => (item=ceph-0)", "", "TASK [check if it is atomic host] **********************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:38", "Monday 25 June 2018 06:04:48 -0400 (0:00:09.286) 0:00:09.551 *********** ", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "ok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "ok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [set_fact is_atomic] ******************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:45", "Monday 25 June 2018 06:04:48 -0400 (0:00:00.735) 0:00:10.287 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "ok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "ok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "TASK [pull rhceph image] *******************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:66", "Monday 25 June 2018 06:04:49 -0400 (0:00:00.262) 0:00:10.549 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran 
handlers", "", "TASK [set ceph monitor install 'In Progress'] **********************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:76", "Monday 25 June 2018 06:04:49 -0400 (0:00:00.191) 0:00:10.741 *********** ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"start\": \"20180625060449Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Monday 25 June 2018 06:04:49 -0400 (0:00:00.162) 0:00:10.903 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.029864\", \"end\": \"2018-06-25 10:04:50.267566\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:04:50.237702\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Monday 25 June 2018 06:04:50 -0400 (0:00:00.662) 0:00:11.566 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Monday 25 June 2018 06:04:50 -0400 (0:00:00.051) 0:00:11.617 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a 
rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Monday 25 June 2018 06:04:50 -0400 (0:00:00.047) 0:00:11.664 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Monday 25 June 2018 06:04:50 -0400 (0:00:00.050) 0:00:11.715 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mgr-controller-0\"], \"delta\": \"0:00:00.029060\", \"end\": \"2018-06-25 10:04:50.971368\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:04:50.942308\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Monday 25 June 2018 06:04:50 -0400 (0:00:00.554) 0:00:12.269 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Monday 25 June 2018 06:04:50 -0400 (0:00:00.047) 0:00:12.317 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Monday 25 June 2018 06:04:50 -0400 (0:00:00.048) 0:00:12.365 *********** ", "skipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.054) 0:00:12.420 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.046) 0:00:12.467 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.046) 0:00:12.513 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.045) 0:00:12.559 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.046) 0:00:12.605 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.047) 0:00:12.653 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.046) 0:00:12.699 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.044) 0:00:12.744 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.044) 0:00:12.788 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.043) 0:00:12.831 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.044) 0:00:12.876 
*********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.051) 0:00:12.928 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.044) 0:00:12.972 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.045) 0:00:13.018 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.043) 0:00:13.061 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.044) 0:00:13.106 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph 
rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.044) 0:00:13.151 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.045) 0:00:13.196 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.042) 0:00:13.238 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.045) 0:00:13.284 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Monday 25 June 2018 06:04:51 -0400 (0:00:00.042) 0:00:13.326 *********** ", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Monday 25 June 2018 06:04:52 -0400 (0:00:00.540) 0:00:13.867 
*********** ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Monday 25 June 2018 06:04:52 -0400 (0:00:00.076) 0:00:13.944 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Monday 25 June 2018 06:04:52 -0400 (0:00:00.083) 0:00:14.027 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Monday 25 June 2018 06:04:52 -0400 (0:00:00.068) 0:00:14.096 *********** ", "ok: [controller-0 -> 192.168.24.14] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] 
********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Monday 25 June 2018 06:04:52 -0400 (0:00:00.145) 0:00:14.242 *********** ", "ok: [controller-0 -> 192.168.24.14] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.037324\", \"end\": \"2018-06-25 10:04:53.520287\", \"failed_when_result\": false, \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2018-06-25 10:04:53.482963\", \"stderr\": \"Error response from daemon: No such container: ceph-mon-controller-0\", \"stderr_lines\": [\"Error response from daemon: No such container: ceph-mon-controller-0\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check if /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Monday 25 June 2018 06:04:53 -0400 (0:00:00.579) 0:00:14.821 *********** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Monday 25 June 2018 06:04:53 -0400 (0:00:00.194) 0:00:15.016 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Monday 25 June 2018 06:04:53 -0400 (0:00:00.050) 0:00:15.067 *********** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir\", 
\"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 988}", "", "TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Monday 25 June 2018 06:04:54 -0400 (0:00:00.421) 0:00:15.488 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Monday 25 June 2018 06:04:54 -0400 (0:00:00.053) 0:00:15.541 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85", "Monday 25 June 2018 06:04:54 -0400 (0:00:00.080) 0:00:15.622 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96", "Monday 25 June 2018 06:04:54 -0400 (0:00:00.054) 0:00:15.677 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105", "Monday 25 June 2018 06:04:54 -0400 (0:00:00.053) 0:00:15.730 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117", "Monday 
25 June 2018 06:04:54 -0400 (0:00:00.043) 0:00:15.773 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123", "Monday 25 June 2018 06:04:54 -0400 (0:00:00.043) 0:00:15.816 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"mds_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129", "Monday 25 June 2018 06:04:54 -0400 (0:00:00.190) 0:00:16.007 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135", "Monday 25 June 2018 06:04:54 -0400 (0:00:00.129) 0:00:16.137 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Monday 25 June 2018 06:04:54 -0400 (0:00:00.077) 0:00:16.214 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Monday 25 June 2018 06:04:54 -0400 (0:00:00.071) 0:00:16.285 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}", "", "TASK [ceph-defaults : resolve device link(s)] 
**********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Monday 25 June 2018 06:04:54 -0400 (0:00:00.069) 0:00:16.354 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166", "Monday 25 June 2018 06:04:55 -0400 (0:00:00.047) 0:00:16.402 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175", "Monday 25 June 2018 06:04:55 -0400 (0:00:00.047) 0:00:16.449 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183", "Monday 25 June 2018 06:04:55 -0400 (0:00:00.043) 0:00:16.492 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Monday 25 June 2018 06:04:55 -0400 (0:00:00.044) 0:00:16.536 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Monday 25 June 2018 06:04:55 -0400 (0:00:00.042) 0:00:16.579 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Monday 25 June 2018 06:04:55 -0400 (0:00:00.048) 0:00:16.628 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Monday 25 June 2018 06:04:55 -0400 (0:00:00.046) 0:00:16.674 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Monday 25 June 2018 06:04:55 -0400 (0:00:00.076) 0:00:16.751 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Monday 25 June 2018 06:04:55 -0400 (0:00:00.065) 0:00:16.816 *********** ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/) => {\"changed\": true, 
\"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": 
\"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Monday 25 June 2018 06:05:01 -0400 (0:00:05.684) 0:00:22.500 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK 
[ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Monday 25 June 2018 06:05:01 -0400 (0:00:00.045) 0:00:22.546 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Monday 25 June 2018 06:05:01 -0400 (0:00:00.055) 0:00:22.601 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Monday 25 June 2018 06:05:01 -0400 (0:00:00.043) 0:00:22.644 *********** ", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Monday 25 June 2018 06:05:02 -0400 (0:00:00.984) 0:00:23.629 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task 
path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Monday 25 June 2018 06:05:02 -0400 (0:00:00.082) 0:00:23.711 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Monday 25 June 2018 06:05:02 -0400 (0:00:00.041) 0:00:23.753 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.026626\", \"end\": \"2018-06-25 10:05:02.989125\", \"rc\": 0, \"start\": \"2018-06-25 10:05:02.962499\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 94f4240/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 94f4240/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Monday 25 June 2018 06:05:02 -0400 (0:00:00.531) 0:00:24.284 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Monday 25 June 2018 06:05:02 -0400 (0:00:00.070) 0:00:24.355 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.029066\", \"end\": \"2018-06-25 10:05:03.620571\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:05:03.591505\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Monday 25 June 2018 06:05:03 -0400 (0:00:00.561) 0:00:24.916 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Monday 25 June 2018 06:05:03 -0400 (0:00:00.088) 0:00:25.005 *********** ", "ok: [controller-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Monday 25 June 2018 06:05:03 -0400 (0:00:00.123) 0:00:25.128 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Monday 25 June 2018 06:05:03 -0400 (0:00:00.087) 0:00:25.216 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", 
\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Monday 25 June 2018 06:05:03 -0400 (0:00:00.091) 0:00:25.308 *********** ", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, 
\"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Monday 25 June 2018 06:05:05 -0400 (0:00:01.284) 0:00:26.592 *********** ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: 
[controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", 
{\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": 
{\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": 
\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': 
{'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Monday 25 June 2018 06:05:05 -0400 (0:00:00.245) 0:00:26.838 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Monday 25 
June 2018 06:05:05 -0400 (0:00:00.039) 0:00:26.878 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Monday 25 June 2018 06:05:05 -0400 (0:00:00.040) 0:00:26.919 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Monday 25 June 2018 06:05:05 -0400 (0:00:00.046) 0:00:26.965 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Monday 25 June 2018 06:05:05 -0400 (0:00:00.045) 0:00:27.010 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Monday 25 June 2018 06:05:05 -0400 (0:00:00.049) 0:00:27.060 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Monday 25 June 2018 06:05:05 -0400 (0:00:00.040) 0:00:27.101 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK 
[ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Monday 25 June 2018 06:05:05 -0400 (0:00:00.040) 0:00:27.142 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Monday 25 June 2018 06:05:05 -0400 (0:00:00.043) 0:00:27.185 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Monday 25 June 2018 06:05:05 -0400 (0:00:00.047) 0:00:27.233 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Monday 25 June 2018 06:05:05 -0400 (0:00:00.042) 0:00:27.275 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Monday 25 June 2018 06:05:05 -0400 (0:00:00.041) 0:00:27.316 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Monday 25 June 2018 06:05:05 -0400 (0:00:00.042) 
0:00:27.359 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.053) 0:00:27.412 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.044) 0:00:27.456 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.041) 0:00:27.498 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.053) 0:00:27.551 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.043) 0:00:27.594 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] 
***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.045) 0:00:27.640 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.040) 0:00:27.680 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.043) 0:00:27.724 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.044) 0:00:27.769 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.041) 0:00:27.810 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.045) 0:00:27.855 *********** ", "skipping: [controller-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.041) 0:00:27.897 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.042) 0:00:27.939 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.045) 0:00:27.984 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.127) 0:00:28.112 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.045) 0:00:28.157 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Monday 25 June 2018 06:05:06 -0400 (0:00:00.047) 0:00:28.204 *********** ", "ok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:16.834319\", \"end\": \"2018-06-25 10:05:24.277992\", \"rc\": 0, \"start\": \"2018-06-25 10:05:07.443673\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-6: Pulling from 192.168.24.1:8787/rhceph\\n9a32f102e677: Pulling fs layer\\nb8aa42cec17a: Pulling fs layer\\nf00cbf28d025: Pulling fs layer\\nb8aa42cec17a: Verifying Checksum\\nb8aa42cec17a: Download complete\\n9a32f102e677: Verifying Checksum\\n9a32f102e677: Download complete\\nf00cbf28d025: Verifying Checksum\\nf00cbf28d025: Download complete\\n9a32f102e677: Pull complete\\nb8aa42cec17a: Pull complete\\nf00cbf28d025: Pull complete\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"9a32f102e677: Pulling fs layer\", \"b8aa42cec17a: Pulling fs layer\", \"f00cbf28d025: Pulling fs layer\", \"b8aa42cec17a: Verifying Checksum\", \"b8aa42cec17a: Download complete\", \"9a32f102e677: Verifying Checksum\", \"9a32f102e677: Download complete\", \"f00cbf28d025: Verifying Checksum\", \"f00cbf28d025: Download complete\", \"9a32f102e677: Pull complete\", \"b8aa42cec17a: Pull complete\", \"f00cbf28d025: Pull complete\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Monday 25 June 2018 06:05:24 -0400 (0:00:17.379) 0:00:45.583 *********** ", "changed: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.036651\", \"end\": \"2018-06-25 10:05:24.877314\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:05:24.840663\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": 
{},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph 
Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n 
\\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": 
\\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/2e4510fb398c1ae72535c5c3f1f0f1546729fe945cd85f87dd450c522e8905ab/diff:/var/lib/docker/overlay2/ba0a06d1080745666a14fd468c920651d33a74f62e3c7d02ed110dfc641fac15/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": 
false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e 
CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" 
\\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e 
CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/2e4510fb398c1ae72535c5c3f1f0f1546729fe945cd85f87dd450c522e8905ab/diff:/var/lib/docker/overlay2/ba0a06d1080745666a14fd468c920651d33a74f62e3c7d02ed110dfc641fac15/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Monday 25 June 2018 06:05:24 -0400 (0:00:00.609) 0:00:46.193 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Monday 25 June 2018 06:05:24 -0400 (0:00:00.069) 0:00:46.263 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Monday 25 June 2018 06:05:24 -0400 (0:00:00.048) 0:00:46.311 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Monday 25 June 2018 06:05:24 -0400 (0:00:00.045) 0:00:46.357 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Monday 25 June 2018 06:05:25 -0400 (0:00:00.044) 0:00:46.401 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Monday 25 June 2018 06:05:25 -0400 (0:00:00.044) 
0:00:46.446 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Monday 25 June 2018 06:05:25 -0400 (0:00:00.047) 0:00:46.493 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Monday 25 June 2018 06:05:25 -0400 (0:00:00.044) 0:00:46.537 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Monday 25 June 2018 06:05:25 -0400 (0:00:00.046) 0:00:46.584 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Monday 25 June 2018 06:05:25 -0400 (0:00:00.047) 0:00:46.632 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Monday 25 June 2018 06:05:25 -0400 (0:00:00.050) 0:00:46.682 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] 
*********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Monday 25 June 2018 06:05:25 -0400 (0:00:00.044) 0:00:46.727 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Monday 25 June 2018 06:05:25 -0400 (0:00:00.044) 0:00:46.772 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.617837\", \"end\": \"2018-06-25 10:05:26.655884\", \"rc\": 0, \"start\": \"2018-06-25 10:05:26.038047\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Monday 25 June 2018 06:05:26 -0400 (0:00:01.180) 0:00:47.952 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Monday 25 June 2018 06:05:26 -0400 (0:00:00.069) 0:00:48.022 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Monday 25 June 2018 
06:05:26 -0400 (0:00:00.044) 0:00:48.067 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Monday 25 June 2018 06:05:26 -0400 (0:00:00.046) 0:00:48.114 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Monday 25 June 2018 06:05:26 -0400 (0:00:00.072) 0:00:48.186 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Monday 25 June 2018 06:05:26 -0400 (0:00:00.051) 0:00:48.238 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Monday 25 June 2018 06:05:26 -0400 (0:00:00.043) 0:00:48.282 *********** ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": 
\"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Monday 25 June 2018 06:05:29 -0400 (0:00:02.454) 0:00:50.736 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Monday 25 June 2018 06:05:29 -0400 (0:00:00.049) 0:00:50.785 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] 
*******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Monday 25 June 2018 06:05:29 -0400 (0:00:00.047) 0:00:50.833 *********** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 988}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Monday 25 June 2018 06:05:29 -0400 (0:00:00.197) 0:00:51.031 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Monday 25 June 2018 06:05:29 -0400 (0:00:00.046) 0:00:51.077 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Monday 25 June 2018 06:05:29 -0400 (0:00:00.041) 0:00:51.119 *********** ", "changed: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Monday 25 June 2018 06:05:30 -0400 (0:00:00.537) 0:00:51.657 *********** ", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for 
controller-0", "NOTIFIED HANDLER ceph-defaults : copy mon restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy osd restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mds restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy rgw restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0", "NOTIFIED HANDLER 
ceph-defaults : restart ceph mgr daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for controller-0", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"677880bddaa262c511eb635c230f19e6a4ddfabe\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"13cb0c834a94e4916365ae02ba1fbe9e\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 761, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921130.31-23484003935514/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Monday 25 June 2018 06:05:33 -0400 (0:00:03.413) 0:00:55.070 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact docker_exec_cmd] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:2", "Monday 25 June 2018 06:05:33 -0400 (0:00:00.044) 0:00:55.115 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-mon : make sure monitor_interface or monitor_address or monitor_address_block is configured] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml:2", "Monday 25 June 2018 06:05:33 -0400 (0:00:00.070) 0:00:55.186 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : generate monitor initial keyring] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:2", "Monday 25 June 2018 06:05:33 -0400 (0:00:00.052) 0:00:55.238 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : read monitor initial keyring if it already exists] ************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:11", "Monday 25 June 2018 06:05:33 -0400 (0:00:00.048) 0:00:55.287 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create monitor initial keyring] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:22", "Monday 25 June 2018 06:05:33 -0400 (0:00:00.047) 0:00:55.334 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set initial monitor key permissions] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:34", "Monday 25 June 2018 06:05:33 -0400 (0:00:00.048) 0:00:55.383 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create (and fix ownership of) monitor directory] **************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:42", "Monday 25 June 2018 06:05:34 -0400 (0:00:00.052) 0:00:55.436 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}", "", "TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap >= ceph_release_num.luminous] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:51", "Monday 25 June 2018 06:05:34 -0400 (0:00:00.046) 0:00:55.482 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap < ceph_release_num.luminous] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:63", "Monday 25 June 2018 06:05:34 -0400 (0:00:00.045) 0:00:55.528 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create custom admin keyring] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:74", "Monday 25 June 2018 06:05:34 -0400 (0:00:00.043) 0:00:55.571 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set ownership of admin keyring] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:88", "Monday 25 June 2018 06:05:34 -0400 (0:00:00.041) 0:00:55.613 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : import admin keyring into mon keyring] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:99", "Monday 25 June 2018 06:05:34 -0400 (0:00:00.047) 0:00:55.661 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ceph monitor mkfs with keyring] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:106", "Monday 25 June 2018 06:05:34 -0400 
(0:00:00.042) 0:00:55.703 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ceph monitor mkfs without keyring] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:113", "Monday 25 June 2018 06:05:34 -0400 (0:00:00.042) 0:00:55.745 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ensure systemd service override directory exists] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:2", "Monday 25 June 2018 06:05:34 -0400 (0:00:00.041) 0:00:55.787 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : add ceph-mon systemd service overrides] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:10", "Monday 25 June 2018 06:05:34 -0400 (0:00:00.054) 0:00:55.841 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : start the monitor service] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:20", "Monday 25 June 2018 06:05:34 -0400 (0:00:00.050) 0:00:55.891 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : enable the ceph-mon.target service] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:29", "Monday 25 June 2018 06:05:34 -0400 (0:00:00.052) 0:00:55.944 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : include ceph_keys.yml] ****************************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:19", "Monday 25 June 2018 06:05:34 -0400 (0:00:00.045) 0:00:55.990 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : collect all the pools] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:2", "Monday 25 June 2018 06:05:34 -0400 (0:00:00.041) 0:00:56.032 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : secure the cluster] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:7", "Monday 25 June 2018 06:05:34 -0400 (0:00:00.038) 0:00:56.070 *********** ", "", "TASK [ceph-mon : set_fact ceph_config_keys] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:2", "Monday 25 June 2018 06:05:34 -0400 (0:00:00.044) 0:00:56.115 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : register rbd bootstrap key] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:11", "Monday 25 June 2018 06:05:34 -0400 (0:00:00.180) 0:00:56.295 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"bootstrap_rbd_keyring\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : merge rbd bootstrap key to config and keys paths] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:17", "Monday 25 June 2018 06:05:35 -0400 
(0:00:00.169) 0:00:56.465 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : stat for ceph config and keys] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:22", "Monday 25 June 2018 06:05:35 -0400 (0:00:00.082) 0:00:56.548 *********** ", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}", "", "TASK [ceph-mon : try to 
copy ceph keys] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:33", "Monday 25 June 2018 06:05:36 -0400 (0:00:00.872) 0:00:57.421 *********** ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, 
'_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": 
[\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, 
\"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : populate kv_store with default ceph.conf] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:2", "Monday 25 June 2018 06:05:36 -0400 (0:00:00.124) 0:00:57.546 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : populate kv_store with custom ceph.conf] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:18", "Monday 25 June 2018 06:05:36 -0400 (0:00:00.047) 0:00:57.594 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : delete populate-kv-store docker] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:36", "Monday 25 June 2018 06:05:36 -0400 (0:00:00.047) 0:00:57.641 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : generate systemd unit file] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:43", "Monday 25 June 2018 06:05:36 -0400 (0:00:00.038) 0:00:57.680 *********** ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"8e2b129de045a7f1e572d1bfdd590c11edd51013\", \"dest\": \"/etc/systemd/system/ceph-mon@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"be7a3a8e4b79b94c82d572bbdfe17fb0\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", 
\"size\": 835, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921136.42-87207207447687/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-mon : systemd start mon container] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:54", "Monday 25 June 2018 06:05:39 -0400 (0:00:02.964) 0:01:00.644 *********** ", "ok: [controller-0] => {\"changed\": false, \"enabled\": true, \"name\": \"ceph-mon@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dmon.slice basic.target docker.service systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Monitor\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --name ceph-mon-%i --net=host --memory=1g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v 
/etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro --net=host -e IP_VERSION=4 -e MON_IP=172.17.3.10 -e CLUSTER=ceph -e FSID=78ace352-763a-11e8-9c1d-525400166144 -e CEPH_PUBLIC_NETWORK=172.17.3.0/24 -e CEPH_DAEMON=MON 192.168.24.1:8787/rhceph:3-6 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mon@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mon@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127793\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127793\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": 
\"0\", \"Names\": \"ceph-mon@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmon.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmon.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "", "TASK [ceph-mon : configure ceph profile.d aliases] *****************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml:2", "Monday 25 June 2018 06:05:40 -0400 (0:00:00.998) 0:01:01.642 *********** ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"78965c7dfcde4827c1cb8645bc7a444472e87718\", \"dest\": \"/etc/profile.d/ceph-aliases.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"66a9bfe5c26a22ade3c67cc7c7a58d2c\", \"mode\": \"0755\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:bin_t:s0\", \"size\": 375, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921140.4-40407098418836/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-mon : wait for monitor socket to exist] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:12", "Monday 25 June 2018 06:05:43 -0400 (0:00:02.860) 0:01:04.503 *********** ", "changed: [controller-0] => {\"attempts\": 1, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"sh\", \"-c\", \"stat /var/run/ceph/ceph-mon.controller-0.asok || stat /var/run/ceph/ceph-mon.controller-0.localdomain.asok\"], \"delta\": \"0:00:00.106485\", \"end\": \"2018-06-25 10:05:43.830371\", \"rc\": 0, \"start\": \"2018-06-25 10:05:43.723886\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: '/var/run/ceph/ceph-mon.controller-0.asok'\\n Size: 0 \\tBlocks: 0 IO Block: 4096 socket\\nDevice: 14h/20d\\tInode: 376176 Links: 1\\nAccess: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\\nAccess: 2018-06-25 10:05:41.570204432 +0000\\nModify: 2018-06-25 10:05:41.570204432 +0000\\nChange: 2018-06-25 10:05:41.570204432 +0000\\n Birth: -\", \"stdout_lines\": [\" File: '/var/run/ceph/ceph-mon.controller-0.asok'\", \" Size: 0 \\tBlocks: 0 IO Block: 4096 socket\", \"Device: 14h/20d\\tInode: 376176 Links: 1\", \"Access: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\", \"Access: 2018-06-25 10:05:41.570204432 +0000\", \"Modify: 2018-06-25 10:05:41.570204432 
+0000\", \"Change: 2018-06-25 10:05:41.570204432 +0000\", \" Birth: -\"]}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:19", "Monday 25 June 2018 06:05:43 -0400 (0:00:00.634) 0:01:05.137 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:29", "Monday 25 June 2018 06:05:43 -0400 (0:00:00.086) 0:01:05.223 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:39", "Monday 25 June 2018 06:05:43 -0400 (0:00:00.092) 0:01:05.316 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--admin-daemon\", \"/var/run/ceph/ceph-mon.controller-0.asok\", \"add_bootstrap_peer_hint\", \"172.17.3.10\"], \"delta\": \"0:00:00.209401\", \"end\": \"2018-06-25 10:05:44.855253\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:05:44.645852\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"mon already active; ignoring bootstrap hint\", \"stdout_lines\": [\"mon already active; ignoring bootstrap hint\"]}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:49", "Monday 25 June 2018 06:05:44 -0400 (0:00:00.837) 0:01:06.153 *********** ", "skipping: [controller-0] => {\"changed\": 
false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:59", "Monday 25 June 2018 06:05:44 -0400 (0:00:00.047) 0:01:06.201 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:69", "Monday 25 June 2018 06:05:44 -0400 (0:00:00.044) 0:01:06.246 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : push ceph files to the ansible server] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2", "Monday 25 June 2018 06:05:44 -0400 (0:00:00.049) 0:01:06.295 *********** ", "changed: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"87f8e20ff9c54bcb76bf97228cb0ba705b439784\", \"dest\": 
\"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/etc/ceph/ceph.client.admin.keyring\", \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"fe671f5606d3379d4ccf3ddb240723ce\", \"remote_checksum\": \"87f8e20ff9c54bcb76bf97228cb0ba705b439784\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"ae4c70255ca42eb77eacd1cf1db0492ada8c18ae\", \"dest\": 
\"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/etc/ceph/ceph.mon.keyring\", \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"ec178c2da843c050d2e2ac237ae701a5\", \"remote_checksum\": \"ae4c70255ca42eb77eacd1cf1db0492ada8c18ae\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"502b9fd25b9d73522bc5c0029ec362bd3ef148be\", \"dest\": 
\"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"2f594fd27d9a2938d207fd0e4dcd1fdb\", \"remote_checksum\": \"502b9fd25b9d73522bc5c0029ec362bd3ef148be\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"381a02ebfa1216a2a279ae665eeaebd1ce6de5f5\", \"dest\": 
\"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"de60d3b20fec15a075e9f0d39a69d366\", \"remote_checksum\": \"381a02ebfa1216a2a279ae665eeaebd1ce6de5f5\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"3540de06c3ed498809bdddd6a350cae592455923\", \"dest\": 
\"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"dc685a0d2a335b9e52bb10344037e6ac\", \"remote_checksum\": \"3540de06c3ed498809bdddd6a350cae592455923\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"c3545cb2f74ad0b3c3491481b9215a04221dc20f\", \"dest\": 
\"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"e0dfc3e5a328b94796559ffa871a90e6\", \"remote_checksum\": \"c3545cb2f74ad0b3c3491481b9215a04221dc20f\", \"remote_md5sum\": null}", "", "TASK [ceph-mon : create ceph rest api keyring when mon is containerized] *******", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:84", "Monday 25 June 2018 06:05:47 -0400 (0:00:03.032) 0:01:09.328 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create ceph mgr keyring(s) when mon is containerized] *********", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:97", "Monday 25 June 2018 06:05:47 -0400 (0:00:00.044) 0:01:09.372 *********** ", "ok: [controller-0] => (item=controller-0) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"get-or-create\", \"mgr.controller-0\", \"mon\", \"allow profile mgr\", \"osd\", \"allow *\", \"mds\", \"allow *\", \"-o\", 
\"/etc/ceph/ceph.mgr.controller-0.keyring\"], \"delta\": \"0:00:00.385673\", \"end\": \"2018-06-25 10:05:49.085002\", \"item\": \"controller-0\", \"rc\": 0, \"start\": \"2018-06-25 10:05:48.699329\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-mon : stat for ceph mgr key(s)] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:109", "Monday 25 June 2018 06:05:48 -0400 (0:00:01.009) 0:01:10.381 *********** ", "ok: [controller-0] => (item=controller-0) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"controller-0\", \"stat\": {\"atime\": 1529921148.958226, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"us-ascii\", \"checksum\": \"dce8b853b5430d214621f9e0ba7d2feebbb2a1a5\", \"ctime\": 1529921149.0672264, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 7654767, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1529921149.0672264, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"18446744072441155520\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "", "TASK [ceph-mon : fetch ceph mgr key(s)] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:121", "Monday 25 June 2018 06:05:49 -0400 (0:00:00.602) 0:01:10.984 *********** ", "changed: [controller-0] => (item={'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 0, u'exists': True, u'attr_flags': u'', 
u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529921149.0672264, u'block_size': 4096, u'inode': 7654767, u'isgid': False, u'size': 67, u'executable': False, u'roth': True, u'charset': u'us-ascii', u'readable': True, u'version': u'18446744072441155520', u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'root', u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1529921148.958226, u'mimetype': u'text/plain', u'ctime': 1529921149.0672264, u'isblk': False, u'xgrp': False, u'dev': 64514, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'checksum': u'dce8b853b5430d214621f9e0ba7d2feebbb2a1a5', u'islnk': False, u'attributes': []}, u'changed': False, '_ansible_no_log': False, 'item': u'controller-0', '_ansible_item_result': True, 'failed': False, u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None}) => {\"changed\": true, \"checksum\": \"dce8b853b5430d214621f9e0ba7d2feebbb2a1a5\", \"dest\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/etc/ceph/ceph.mgr.controller-0.keyring\", \"item\": {\"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"controller-0\", \"stat\": {\"atime\": 1529921148.958226, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"us-ascii\", \"checksum\": 
\"dce8b853b5430d214621f9e0ba7d2feebbb2a1a5\", \"ctime\": 1529921149.0672264, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 7654767, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1529921149.0672264, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"18446744072441155520\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}, \"md5sum\": \"46173b1f477ccec40e6961621fd8c750\", \"remote_checksum\": \"dce8b853b5430d214621f9e0ba7d2feebbb2a1a5\", \"remote_md5sum\": null}", "", "TASK [ceph-mon : configure crush hierarchy] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:2", "Monday 25 June 2018 06:05:50 -0400 (0:00:00.572) 0:01:11.557 *********** ", "skipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create configured crush rules] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:14", "Monday 25 June 2018 06:05:50 -0400 (0:00:00.044) 0:01:11.601 *********** ", "skipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {\"changed\": false, \"item\": 
{\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : get id for new default crush rule] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:21", "Monday 25 June 2018 06:05:50 -0400 (0:00:00.056) 0:01:11.657 *********** ", "skipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact info_ceph_default_crush_rule_yaml] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:33", "Monday 25 June 2018 06:05:50 -0400 (0:00:00.055) 0:01:11.713 *********** ", "skipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'HDD', u'name': u'HDD'}, 'changed': False, '_ansible_ignore_errors': None}) => {\"changed\": false, \"item\": {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\", \"skipped\": true}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'SSD', u'name': u'SSD'}, 'changed': False, 
'_ansible_ignore_errors': None}) => {\"changed\": false, \"item\": {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\", \"skipped\": true}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_crush_rule to osd_pool_default_crush_replicated_ruleset if release < luminous else osd_pool_default_crush_rule] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:41", "Monday 25 June 2018 06:05:50 -0400 (0:00:00.055) 0:01:11.769 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : insert new default crush rule into daemon to prevent restart] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:45", "Monday 25 June 2018 06:05:50 -0400 (0:00:00.075) 0:01:11.844 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : add new default crush rule to ceph.conf] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:54", "Monday 25 June 2018 06:05:50 -0400 (0:00:00.077) 0:01:11.921 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : get default value for osd_pool_default_pg_num] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:5", "Monday 25 June 2018 06:05:50 -0400 (0:00:00.048) 0:01:11.970 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_pg_num with pool_default_pg_num (backward compatibility)] ***", 
"task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:16", "Monday 25 June 2018 06:05:50 -0400 (0:00:00.053) 0:01:12.024 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_pg_num with default_pool_default_pg_num.stdout] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:21", "Monday 25 June 2018 06:05:50 -0400 (0:00:00.042) 0:01:12.067 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_pg_num ceph_conf_overrides.global.osd_pool_default_pg_num] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:27", "Monday 25 June 2018 06:05:50 -0400 (0:00:00.041) 0:01:12.108 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"osd_pool_default_pg_num\": \"32\"}, \"changed\": false}", "", "TASK [ceph-mon : increase calamari logging level when debug is on] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:9", "Monday 25 June 2018 06:05:50 -0400 (0:00:00.069) 0:01:12.177 *********** ", "skipping: [controller-0] => (item=cthulhu) => {\"changed\": false, \"item\": \"cthulhu\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=calamari_web) => {\"changed\": false, \"item\": \"calamari_web\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : initialize the calamari server api] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:20", "Monday 25 June 2018 06:05:50 -0400 (0:00:00.048) 0:01:12.225 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] 
*******", "Monday 25 June 2018 06:05:50 -0400 (0:00:00.015) 0:01:12.241 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************", "Monday 25 June 2018 06:05:50 -0400 (0:00:00.063) 0:01:12.304 *********** ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"a16eea5d614de2b10079cb91a04686e919ccc201\", \"dest\": \"/tmp/restart_mon_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"b59e1abae52d61eb05b9ff080771a551\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_home_t:s0\", \"size\": 1173, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921150.98-47213257884394/source\", \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***", "Monday 25 June 2018 06:05:53 -0400 (0:00:02.730) 0:01:15.034 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******", "Monday 25 June 2018 06:05:53 -0400 (0:00:00.082) 0:01:15.117 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********", "Monday 25 June 2018 06:05:53 -0400 (0:00:00.122) 0:01:15.239 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******", "Monday 25 June 2018 06:05:54 -0400 (0:00:00.166) 0:01:15.406 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}", "", 
"RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************", "Monday 25 June 2018 06:05:54 -0400 (0:00:00.172) 0:01:15.579 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***", "Monday 25 June 2018 06:05:54 -0400 (0:00:00.045) 0:01:15.625 *********** ", "skipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******", "Monday 25 June 2018 06:05:54 -0400 (0:00:00.074) 0:01:15.699 *********** ", "skipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********", "Monday 25 June 2018 06:05:54 -0400 (0:00:00.073) 0:01:15.772 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******", "Monday 25 June 2018 06:05:54 -0400 (0:00:00.160) 0:01:15.933 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************", "Monday 25 June 2018 06:05:54 -0400 (0:00:00.164) 0:01:16.097 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***", "Monday 25 June 2018 06:05:54 -0400 (0:00:00.043) 0:01:16.140 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds 
daemon(s) - container] *******", "Monday 25 June 2018 06:05:54 -0400 (0:00:00.053) 0:01:16.194 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********", "Monday 25 June 2018 06:05:54 -0400 (0:00:00.053) 0:01:16.247 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******", "Monday 25 June 2018 06:05:55 -0400 (0:00:00.158) 0:01:16.406 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************", "Monday 25 June 2018 06:05:55 -0400 (0:00:00.154) 0:01:16.560 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***", "Monday 25 June 2018 06:05:55 -0400 (0:00:00.044) 0:01:16.605 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******", "Monday 25 June 2018 06:05:55 -0400 (0:00:00.053) 0:01:16.659 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********", "Monday 25 June 2018 06:05:55 -0400 (0:00:00.049) 0:01:16.708 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***", "Monday 25 June 2018 06:05:55 -0400 (0:00:00.151) 0:01:16.860 *********** ", "ok: [controller-0] => 
{\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************", "Monday 25 June 2018 06:05:55 -0400 (0:00:00.132) 0:01:16.993 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***", "Monday 25 June 2018 06:05:55 -0400 (0:00:00.043) 0:01:17.036 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***", "Monday 25 June 2018 06:05:55 -0400 (0:00:00.050) 0:01:17.086 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***", "Monday 25 June 2018 06:05:55 -0400 (0:00:00.050) 0:01:17.137 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Monday 25 June 2018 06:05:55 -0400 (0:00:00.060) 0:01:17.197 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Monday 25 June 2018 06:05:55 -0400 (0:00:00.065) 0:01:17.262 *********** ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"f36b3460f6762a853a3dab1958afb7d83ff8f234\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"9d50588dc55f43284b00033b8b30edc3\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_home_t:s0\", \"size\": 570, \"src\": 
\"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921155.93-172993666328884/source\", \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Monday 25 June 2018 06:05:58 -0400 (0:00:02.645) 0:01:19.908 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Monday 25 June 2018 06:05:58 -0400 (0:00:00.082) 0:01:19.990 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Monday 25 June 2018 06:05:58 -0400 (0:00:00.121) 0:01:20.112 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [set ceph monitor install 'Complete'] *************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:98", "Monday 25 June 2018 06:05:58 -0400 (0:00:00.097) 0:01:20.210 *********** ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"end\": \"20180625060558Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "PLAY [mgrs] ********************************************************************", "", "TASK [set ceph manager install 'In Progress'] **********************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:110", "Monday 25 June 2018 06:05:58 -0400 (0:00:00.138) 0:01:20.348 *********** ", "ok: [controller-0] 
=> {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"start\": \"20180625060558Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Monday 25 June 2018 06:05:59 -0400 (0:00:00.077) 0:01:20.426 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.030411\", \"end\": \"2018-06-25 10:05:59.682239\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:05:59.651828\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"cb29ae4ab48a\", \"stdout_lines\": [\"cb29ae4ab48a\"]}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Monday 25 June 2018 06:05:59 -0400 (0:00:00.554) 0:01:20.980 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Monday 25 June 2018 06:05:59 -0400 (0:00:00.045) 0:01:21.026 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Monday 25 June 2018 06:05:59 -0400 (0:00:00.046) 0:01:21.073 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a 
mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Monday 25 June 2018 06:05:59 -0400 (0:00:00.043) 0:01:21.116 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mgr-controller-0\"], \"delta\": \"0:00:00.033465\", \"end\": \"2018-06-25 10:06:00.369884\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:06:00.336419\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.551) 0:01:21.668 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.045) 0:01:21.714 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.043) 0:01:21.757 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.048) 0:01:21.805 *********** ", "skipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.041) 0:01:21.846 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.042) 0:01:21.889 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.040) 0:01:21.929 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.041) 0:01:21.971 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.043) 0:01:22.015 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.043) 0:01:22.059 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.043) 0:01:22.103 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.042) 0:01:22.145 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.040) 0:01:22.186 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.042) 0:01:22.228 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.050) 0:01:22.278 
*********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.046) 0:01:22.325 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Monday 25 June 2018 06:06:00 -0400 (0:00:00.043) 0:01:22.368 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Monday 25 June 2018 06:06:01 -0400 (0:00:00.041) 0:01:22.409 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Monday 25 June 2018 06:06:01 -0400 (0:00:00.041) 0:01:22.451 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Monday 25 June 2018 06:06:01 -0400 (0:00:00.045) 0:01:22.497 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : 
check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Monday 25 June 2018 06:06:01 -0400 (0:00:00.047) 0:01:22.545 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Monday 25 June 2018 06:06:01 -0400 (0:00:00.044) 0:01:22.589 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Monday 25 June 2018 06:06:01 -0400 (0:00:00.044) 0:01:22.633 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Monday 25 June 2018 06:06:01 -0400 (0:00:00.042) 0:01:22.676 *********** ", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Monday 25 June 2018 06:06:01 -0400 (0:00:00.515) 0:01:23.192 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Monday 25 June 2018 06:06:01 -0400 (0:00:00.167) 0:01:23.360 *********** ", "ok: [controller-0] => 
{\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Monday 25 June 2018 06:06:02 -0400 (0:00:00.071) 0:01:23.431 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Monday 25 June 2018 06:06:02 -0400 (0:00:00.065) 0:01:23.497 *********** ", "ok: [controller-0 -> 192.168.24.14] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Monday 25 June 2018 06:06:02 -0400 (0:00:00.259) 0:01:23.756 *********** ", "ok: [controller-0 -> 192.168.24.14] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.363071\", \"end\": \"2018-06-25 10:06:03.448818\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:06:03.085747\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"78ace352-763a-11e8-9c1d-525400166144\", \"stdout_lines\": [\"78ace352-763a-11e8-9c1d-525400166144\"]}", "", "TASK [ceph-defaults : check if /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Monday 25 June 2018 06:06:03 -0400 (0:00:00.997) 0:01:24.753 *********** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] 
*************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Monday 25 June 2018 06:06:03 -0400 (0:00:00.321) 0:01:25.075 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Monday 25 June 2018 06:06:03 -0400 (0:00:00.048) 0:01:25.124 *********** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 50, \"state\": \"directory\", \"uid\": 988}", "", "TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Monday 25 June 2018 06:06:03 -0400 (0:00:00.192) 0:01:25.316 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"fsid\": \"78ace352-763a-11e8-9c1d-525400166144\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Monday 25 June 2018 06:06:03 -0400 (0:00:00.072) 0:01:25.389 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85", "Monday 25 June 2018 06:06:04 -0400 (0:00:00.078) 0:01:25.468 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", 
"task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96", "Monday 25 June 2018 06:06:04 -0400 (0:00:00.046) 0:01:25.514 *********** ", "changed: [controller-0 -> localhost] => {\"changed\": true, \"cmd\": \"echo 78ace352-763a-11e8-9c1d-525400166144 | tee /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"delta\": \"0:00:00.004942\", \"end\": \"2018-06-25 06:06:04.273707\", \"rc\": 0, \"start\": \"2018-06-25 06:06:04.268765\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"78ace352-763a-11e8-9c1d-525400166144\", \"stdout_lines\": [\"78ace352-763a-11e8-9c1d-525400166144\"]}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105", "Monday 25 June 2018 06:06:04 -0400 (0:00:00.197) 0:01:25.711 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117", "Monday 25 June 2018 06:06:04 -0400 (0:00:00.042) 0:01:25.753 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123", "Monday 25 June 2018 06:06:04 -0400 (0:00:00.039) 0:01:25.793 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"mds_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129", "Monday 25 June 2018 06:06:04 -0400 (0:00:00.082) 0:01:25.875 *********** ", "skipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135", "Monday 25 June 2018 06:06:04 -0400 (0:00:00.045) 0:01:25.921 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Monday 25 June 2018 06:06:04 -0400 (0:00:00.042) 0:01:25.964 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Monday 25 June 2018 06:06:04 -0400 (0:00:00.045) 0:01:26.009 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Monday 25 June 2018 06:06:04 -0400 (0:00:00.045) 0:01:26.054 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166", "Monday 25 June 2018 06:06:04 -0400 (0:00:00.048) 0:01:26.103 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175", "Monday 25 June 2018 06:06:04 -0400 
(0:00:00.044) 0:01:26.147 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183", "Monday 25 June 2018 06:06:04 -0400 (0:00:00.045) 0:01:26.192 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Monday 25 June 2018 06:06:04 -0400 (0:00:00.044) 0:01:26.236 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Monday 25 June 2018 06:06:04 -0400 (0:00:00.045) 0:01:26.282 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Monday 25 June 2018 06:06:04 -0400 (0:00:00.042) 0:01:26.324 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Monday 25 June 2018 06:06:04 -0400 (0:00:00.055) 0:01:26.380 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Monday 25 June 2018 06:06:05 -0400 (0:00:00.072) 0:01:26.453 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Monday 25 June 2018 06:06:05 -0400 (0:00:00.066) 0:01:26.519 *********** ", "ok: [controller-0] => (item=/etc/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 160, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/mon) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": 
\"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/tmp) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 28, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 35, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": 
\"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/run/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 60, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Monday 25 June 2018 06:06:10 -0400 (0:00:05.758) 0:01:32.278 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Monday 25 June 2018 06:06:10 -0400 (0:00:00.043) 0:01:32.322 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Monday 25 June 2018 06:06:10 -0400 (0:00:00.050) 0:01:32.372 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove 
ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Monday 25 June 2018 06:06:11 -0400 (0:00:00.042) 0:01:32.414 *********** ", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Monday 25 June 2018 06:06:12 -0400 (0:00:00.997) 0:01:33.412 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Monday 25 June 2018 06:06:12 -0400 (0:00:00.067) 0:01:33.479 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Monday 25 June 2018 06:06:12 -0400 (0:00:00.038) 0:01:33.518 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.027786\", \"end\": \"2018-06-25 10:06:12.772727\", \"rc\": 0, \"start\": \"2018-06-25 10:06:12.744941\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 94f4240/1.13.1\", 
\"stdout_lines\": [\"Docker version 1.13.1, build 94f4240/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Monday 25 June 2018 06:06:12 -0400 (0:00:00.552) 0:01:34.071 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Monday 25 June 2018 06:06:12 -0400 (0:00:00.078) 0:01:34.149 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.030387\", \"end\": \"2018-06-25 10:06:13.407756\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:06:13.377369\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"cb29ae4ab48a\", \"stdout_lines\": [\"cb29ae4ab48a\"]}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Monday 25 June 2018 06:06:13 -0400 (0:00:00.553) 0:01:34.703 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Monday 25 June 2018 06:06:13 -0400 (0:00:00.053) 0:01:34.757 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Monday 25 June 2018 06:06:13 -0400 (0:00:00.059) 0:01:34.816 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Monday 25 June 2018 06:06:13 -0400 (0:00:00.057) 0:01:34.874 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Monday 25 June 2018 06:06:13 -0400 (0:00:00.060) 0:01:34.934 *********** ", "skipping: [controller-0] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": 
false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Monday 25 June 2018 06:06:13 -0400 (0:00:00.106) 0:01:35.041 *********** ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": 
[\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', 'changed': False, 
'_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Monday 25 June 2018 06:06:13 -0400 (0:00:00.111) 0:01:35.152 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Monday 25 June 2018 06:06:13 -0400 (0:00:00.045) 0:01:35.198 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Monday 25 June 2018 06:06:13 -0400 (0:00:00.048) 0:01:35.246 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Monday 25 June 2018 06:06:13 -0400 (0:00:00.054) 0:01:35.301 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Monday 25 June 2018 06:06:13 -0400 (0:00:00.049) 0:01:35.351 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Monday 25 June 2018 06:06:14 -0400 (0:00:00.052) 0:01:35.403 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Monday 25 June 2018 06:06:14 -0400 (0:00:00.044) 0:01:35.448 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Monday 25 June 2018 06:06:14 -0400 (0:00:00.049) 0:01:35.498 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Monday 25 June 2018 06:06:14 -0400 (0:00:00.049) 0:01:35.547 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"cb29ae4ab48a\"], \"delta\": \"0:00:00.031135\", \"end\": \"2018-06-25 10:06:14.921687\", \"rc\": 0, \"start\": \"2018-06-25 10:06:14.890552\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"cb29ae4ab48a657c8c381eaccfebf96cdebd1b8eedd2ab415185f68dc2b8a034\\\",\\n \\\"Created\\\": 
\\\"2018-06-25T10:05:40.357427181Z\\\",\\n \\\"Path\\\": \\\"/entrypoint.sh\\\",\\n \\\"Args\\\": [],\\n \\\"State\\\": {\\n \\\"Status\\\": \\\"running\\\",\\n \\\"Running\\\": true,\\n \\\"Paused\\\": false,\\n \\\"Restarting\\\": false,\\n \\\"OOMKilled\\\": false,\\n \\\"Dead\\\": false,\\n \\\"Pid\\\": 26141,\\n \\\"ExitCode\\\": 0,\\n \\\"Error\\\": \\\"\\\",\\n \\\"StartedAt\\\": \\\"2018-06-25T10:05:40.580604569Z\\\",\\n \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\\n },\\n \\\"Image\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/cb29ae4ab48a657c8c381eaccfebf96cdebd1b8eedd2ab415185f68dc2b8a034/resolv.conf\\\",\\n \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/cb29ae4ab48a657c8c381eaccfebf96cdebd1b8eedd2ab415185f68dc2b8a034/hostname\\\",\\n \\\"HostsPath\\\": \\\"/var/lib/docker/containers/cb29ae4ab48a657c8c381eaccfebf96cdebd1b8eedd2ab415185f68dc2b8a034/hosts\\\",\\n \\\"LogPath\\\": \\\"\\\",\\n \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\\n \\\"RestartCount\\\": 0,\\n \\\"Driver\\\": \\\"overlay2\\\",\\n \\\"MountLabel\\\": \\\"\\\",\\n \\\"ProcessLabel\\\": \\\"\\\",\\n \\\"AppArmorProfile\\\": \\\"\\\",\\n \\\"ExecIDs\\\": null,\\n \\\"HostConfig\\\": {\\n \\\"Binds\\\": [\\n \\\"/var/run/ceph:/var/run/ceph:z\\\",\\n \\\"/etc/localtime:/etc/localtime:ro\\\",\\n \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\\n \\\"/etc/ceph:/etc/ceph:z\\\"\\n ],\\n \\\"ContainerIDFile\\\": \\\"\\\",\\n \\\"LogConfig\\\": {\\n \\\"Type\\\": \\\"journald\\\",\\n \\\"Config\\\": {}\\n },\\n \\\"NetworkMode\\\": \\\"host\\\",\\n \\\"PortBindings\\\": {},\\n \\\"RestartPolicy\\\": {\\n \\\"Name\\\": \\\"no\\\",\\n \\\"MaximumRetryCount\\\": 0\\n },\\n \\\"AutoRemove\\\": true,\\n \\\"VolumeDriver\\\": \\\"\\\",\\n \\\"VolumesFrom\\\": null,\\n \\\"CapAdd\\\": null,\\n \\\"CapDrop\\\": null,\\n \\\"Dns\\\": [],\\n \\\"DnsOptions\\\": [],\\n \\\"DnsSearch\\\": [],\\n 
\\\"ExtraHosts\\\": null,\\n \\\"GroupAdd\\\": null,\\n \\\"IpcMode\\\": \\\"\\\",\\n \\\"Cgroup\\\": \\\"\\\",\\n \\\"Links\\\": null,\\n \\\"OomScoreAdj\\\": 0,\\n \\\"PidMode\\\": \\\"\\\",\\n \\\"Privileged\\\": false,\\n \\\"PublishAllPorts\\\": false,\\n \\\"ReadonlyRootfs\\\": false,\\n \\\"SecurityOpt\\\": null,\\n \\\"UTSMode\\\": \\\"\\\",\\n \\\"UsernsMode\\\": \\\"\\\",\\n \\\"ShmSize\\\": 67108864,\\n \\\"Runtime\\\": \\\"docker-runc\\\",\\n \\\"ConsoleSize\\\": [\\n 0,\\n 0\\n ],\\n \\\"Isolation\\\": \\\"\\\",\\n \\\"CpuShares\\\": 0,\\n \\\"Memory\\\": 1073741824,\\n \\\"NanoCpus\\\": 0,\\n \\\"CgroupParent\\\": \\\"\\\",\\n \\\"BlkioWeight\\\": 0,\\n \\\"BlkioWeightDevice\\\": null,\\n \\\"BlkioDeviceReadBps\\\": null,\\n \\\"BlkioDeviceWriteBps\\\": null,\\n \\\"BlkioDeviceReadIOps\\\": null,\\n \\\"BlkioDeviceWriteIOps\\\": null,\\n \\\"CpuPeriod\\\": 0,\\n \\\"CpuQuota\\\": 100000,\\n \\\"CpuRealtimePeriod\\\": 0,\\n \\\"CpuRealtimeRuntime\\\": 0,\\n \\\"CpusetCpus\\\": \\\"\\\",\\n \\\"CpusetMems\\\": \\\"\\\",\\n \\\"Devices\\\": [],\\n \\\"DiskQuota\\\": 0,\\n \\\"KernelMemory\\\": 0,\\n \\\"MemoryReservation\\\": 0,\\n \\\"MemorySwap\\\": 2147483648,\\n \\\"MemorySwappiness\\\": -1,\\n \\\"OomKillDisable\\\": false,\\n \\\"PidsLimit\\\": 0,\\n \\\"Ulimits\\\": null,\\n \\\"CpuCount\\\": 0,\\n \\\"CpuPercent\\\": 0,\\n \\\"IOMaximumIOps\\\": 0,\\n \\\"IOMaximumBandwidth\\\": 0\\n },\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/72e8dec3b850511560ab35c5ce6e4273d355c4033e428929efa3c4a61bf32e87-init/diff:/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/diff:/var/lib/docker/overlay2/2e4510fb398c1ae72535c5c3f1f0f1546729fe945cd85f87dd450c522e8905ab/diff:/var/lib/docker/overlay2/ba0a06d1080745666a14fd468c920651d33a74f62e3c7d02ed110dfc641fac15/diff\\\",\\n \\\"MergedDir\\\": 
\\\"/var/lib/docker/overlay2/72e8dec3b850511560ab35c5ce6e4273d355c4033e428929efa3c4a61bf32e87/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/72e8dec3b850511560ab35c5ce6e4273d355c4033e428929efa3c4a61bf32e87/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/72e8dec3b850511560ab35c5ce6e4273d355c4033e428929efa3c4a61bf32e87/work\\\"\\n }\\n },\\n \\\"Mounts\\\": [\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/run/ceph\\\",\\n \\\"Destination\\\": \\\"/var/run/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/localtime\\\",\\n \\\"Destination\\\": \\\"/etc/localtime\\\",\\n \\\"Mode\\\": \\\"ro\\\",\\n \\\"RW\\\": false,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/lib/ceph\\\",\\n \\\"Destination\\\": \\\"/var/lib/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/ceph\\\",\\n \\\"Destination\\\": \\\"/etc/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"volume\\\",\\n \\\"Name\\\": \\\"8f3068437bdbae66693ee6dd4595d60e9eaff3d82d71ee0406b0e2dcd0a45c20\\\",\\n \\\"Source\\\": \\\"/var/lib/docker/volumes/8f3068437bdbae66693ee6dd4595d60e9eaff3d82d71ee0406b0e2dcd0a45c20/_data\\\",\\n \\\"Destination\\\": \\\"/etc/ganesha\\\",\\n \\\"Driver\\\": \\\"local\\\",\\n \\\"Mode\\\": \\\"\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"\\\"\\n }\\n ],\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"controller-0\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": true,\\n \\\"AttachStderr\\\": true,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n 
\\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"IP_VERSION=4\\\",\\n \\\"MON_IP=172.17.3.10\\\",\\n \\\"CLUSTER=ceph\\\",\\n \\\"FSID=78ace352-763a-11e8-9c1d-525400166144\\\",\\n \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\\n \\\"CEPH_DAEMON=MON\\\",\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-6\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": null,\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh 
${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"NetworkSettings\\\": {\\n \\\"Bridge\\\": \\\"\\\",\\n \\\"SandboxID\\\": \\\"e052bee5d9af655352795529880d27cb4f393d81e46a07ca5c9dc883cb29c9c4\\\",\\n \\\"HairpinMode\\\": false,\\n \\\"LinkLocalIPv6Address\\\": \\\"\\\",\\n \\\"LinkLocalIPv6PrefixLen\\\": 0,\\n \\\"Ports\\\": {},\\n \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\\n \\\"SecondaryIPAddresses\\\": null,\\n \\\"SecondaryIPv6Addresses\\\": null,\\n \\\"EndpointID\\\": \\\"\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"MacAddress\\\": \\\"\\\",\\n \\\"Networks\\\": {\\n \\\"host\\\": {\\n \\\"IPAMConfig\\\": null,\\n \\\"Links\\\": null,\\n \\\"Aliases\\\": null,\\n \\\"NetworkID\\\": \\\"c9c6a3bb3898616d34a69d242692ca582d49c06f21e38e564a1a1599d7e4f817\\\",\\n \\\"EndpointID\\\": \\\"d91011ef02755b329a8a875f6807706c8374765c46706ed5043bb7dd08eab78d\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n 
\\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"MacAddress\\\": \\\"\\\"\\n }\\n }\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"cb29ae4ab48a657c8c381eaccfebf96cdebd1b8eedd2ab415185f68dc2b8a034\\\",\", \" \\\"Created\\\": \\\"2018-06-25T10:05:40.357427181Z\\\",\", \" \\\"Path\\\": \\\"/entrypoint.sh\\\",\", \" \\\"Args\\\": [],\", \" \\\"State\\\": {\", \" \\\"Status\\\": \\\"running\\\",\", \" \\\"Running\\\": true,\", \" \\\"Paused\\\": false,\", \" \\\"Restarting\\\": false,\", \" \\\"OOMKilled\\\": false,\", \" \\\"Dead\\\": false,\", \" \\\"Pid\\\": 26141,\", \" \\\"ExitCode\\\": 0,\", \" \\\"Error\\\": \\\"\\\",\", \" \\\"StartedAt\\\": \\\"2018-06-25T10:05:40.580604569Z\\\",\", \" \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\", \" },\", \" \\\"Image\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/cb29ae4ab48a657c8c381eaccfebf96cdebd1b8eedd2ab415185f68dc2b8a034/resolv.conf\\\",\", \" \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/cb29ae4ab48a657c8c381eaccfebf96cdebd1b8eedd2ab415185f68dc2b8a034/hostname\\\",\", \" \\\"HostsPath\\\": \\\"/var/lib/docker/containers/cb29ae4ab48a657c8c381eaccfebf96cdebd1b8eedd2ab415185f68dc2b8a034/hosts\\\",\", \" \\\"LogPath\\\": \\\"\\\",\", \" \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\", \" \\\"RestartCount\\\": 0,\", \" \\\"Driver\\\": \\\"overlay2\\\",\", \" \\\"MountLabel\\\": \\\"\\\",\", \" \\\"ProcessLabel\\\": \\\"\\\",\", \" \\\"AppArmorProfile\\\": \\\"\\\",\", \" \\\"ExecIDs\\\": null,\", \" \\\"HostConfig\\\": {\", \" \\\"Binds\\\": [\", \" \\\"/var/run/ceph:/var/run/ceph:z\\\",\", \" \\\"/etc/localtime:/etc/localtime:ro\\\",\", \" \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\", \" \\\"/etc/ceph:/etc/ceph:z\\\"\", \" ],\", \" \\\"ContainerIDFile\\\": \\\"\\\",\", \" 
\\\"LogConfig\\\": {\", \" \\\"Type\\\": \\\"journald\\\",\", \" \\\"Config\\\": {}\", \" },\", \" \\\"NetworkMode\\\": \\\"host\\\",\", \" \\\"PortBindings\\\": {},\", \" \\\"RestartPolicy\\\": {\", \" \\\"Name\\\": \\\"no\\\",\", \" \\\"MaximumRetryCount\\\": 0\", \" },\", \" \\\"AutoRemove\\\": true,\", \" \\\"VolumeDriver\\\": \\\"\\\",\", \" \\\"VolumesFrom\\\": null,\", \" \\\"CapAdd\\\": null,\", \" \\\"CapDrop\\\": null,\", \" \\\"Dns\\\": [],\", \" \\\"DnsOptions\\\": [],\", \" \\\"DnsSearch\\\": [],\", \" \\\"ExtraHosts\\\": null,\", \" \\\"GroupAdd\\\": null,\", \" \\\"IpcMode\\\": \\\"\\\",\", \" \\\"Cgroup\\\": \\\"\\\",\", \" \\\"Links\\\": null,\", \" \\\"OomScoreAdj\\\": 0,\", \" \\\"PidMode\\\": \\\"\\\",\", \" \\\"Privileged\\\": false,\", \" \\\"PublishAllPorts\\\": false,\", \" \\\"ReadonlyRootfs\\\": false,\", \" \\\"SecurityOpt\\\": null,\", \" \\\"UTSMode\\\": \\\"\\\",\", \" \\\"UsernsMode\\\": \\\"\\\",\", \" \\\"ShmSize\\\": 67108864,\", \" \\\"Runtime\\\": \\\"docker-runc\\\",\", \" \\\"ConsoleSize\\\": [\", \" 0,\", \" 0\", \" ],\", \" \\\"Isolation\\\": \\\"\\\",\", \" \\\"CpuShares\\\": 0,\", \" \\\"Memory\\\": 1073741824,\", \" \\\"NanoCpus\\\": 0,\", \" \\\"CgroupParent\\\": \\\"\\\",\", \" \\\"BlkioWeight\\\": 0,\", \" \\\"BlkioWeightDevice\\\": null,\", \" \\\"BlkioDeviceReadBps\\\": null,\", \" \\\"BlkioDeviceWriteBps\\\": null,\", \" \\\"BlkioDeviceReadIOps\\\": null,\", \" \\\"BlkioDeviceWriteIOps\\\": null,\", \" \\\"CpuPeriod\\\": 0,\", \" \\\"CpuQuota\\\": 100000,\", \" \\\"CpuRealtimePeriod\\\": 0,\", \" \\\"CpuRealtimeRuntime\\\": 0,\", \" \\\"CpusetCpus\\\": \\\"\\\",\", \" \\\"CpusetMems\\\": \\\"\\\",\", \" \\\"Devices\\\": [],\", \" \\\"DiskQuota\\\": 0,\", \" \\\"KernelMemory\\\": 0,\", \" \\\"MemoryReservation\\\": 0,\", \" \\\"MemorySwap\\\": 2147483648,\", \" \\\"MemorySwappiness\\\": -1,\", \" \\\"OomKillDisable\\\": false,\", \" \\\"PidsLimit\\\": 0,\", \" \\\"Ulimits\\\": null,\", \" \\\"CpuCount\\\": 0,\", \" 
\\\"CpuPercent\\\": 0,\", \" \\\"IOMaximumIOps\\\": 0,\", \" \\\"IOMaximumBandwidth\\\": 0\", \" },\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/72e8dec3b850511560ab35c5ce6e4273d355c4033e428929efa3c4a61bf32e87-init/diff:/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/diff:/var/lib/docker/overlay2/2e4510fb398c1ae72535c5c3f1f0f1546729fe945cd85f87dd450c522e8905ab/diff:/var/lib/docker/overlay2/ba0a06d1080745666a14fd468c920651d33a74f62e3c7d02ed110dfc641fac15/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/72e8dec3b850511560ab35c5ce6e4273d355c4033e428929efa3c4a61bf32e87/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/72e8dec3b850511560ab35c5ce6e4273d355c4033e428929efa3c4a61bf32e87/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/72e8dec3b850511560ab35c5ce6e4273d355c4033e428929efa3c4a61bf32e87/work\\\"\", \" }\", \" },\", \" \\\"Mounts\\\": [\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/run/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/run/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/localtime\\\",\", \" \\\"Destination\\\": \\\"/etc/localtime\\\",\", \" \\\"Mode\\\": \\\"ro\\\",\", \" \\\"RW\\\": false,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/ceph\\\",\", \" \\\"Destination\\\": \\\"/etc/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": 
\\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"volume\\\",\", \" \\\"Name\\\": \\\"8f3068437bdbae66693ee6dd4595d60e9eaff3d82d71ee0406b0e2dcd0a45c20\\\",\", \" \\\"Source\\\": \\\"/var/lib/docker/volumes/8f3068437bdbae66693ee6dd4595d60e9eaff3d82d71ee0406b0e2dcd0a45c20/_data\\\",\", \" \\\"Destination\\\": \\\"/etc/ganesha\\\",\", \" \\\"Driver\\\": \\\"local\\\",\", \" \\\"Mode\\\": \\\"\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"\\\"\", \" }\", \" ],\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"controller-0\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": true,\", \" \\\"AttachStderr\\\": true,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"IP_VERSION=4\\\",\", \" \\\"MON_IP=172.17.3.10\\\",\", \" \\\"CLUSTER=ceph\\\",\", \" \\\"FSID=78ace352-763a-11e8-9c1d-525400166144\\\",\", \" \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\", \" \\\"CEPH_DAEMON=MON\\\",\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-6\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": null,\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" 
\\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"NetworkSettings\\\": {\", \" 
\\\"Bridge\\\": \\\"\\\",\", \" \\\"SandboxID\\\": \\\"e052bee5d9af655352795529880d27cb4f393d81e46a07ca5c9dc883cb29c9c4\\\",\", \" \\\"HairpinMode\\\": false,\", \" \\\"LinkLocalIPv6Address\\\": \\\"\\\",\", \" \\\"LinkLocalIPv6PrefixLen\\\": 0,\", \" \\\"Ports\\\": {},\", \" \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\", \" \\\"SecondaryIPAddresses\\\": null,\", \" \\\"SecondaryIPv6Addresses\\\": null,\", \" \\\"EndpointID\\\": \\\"\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"MacAddress\\\": \\\"\\\",\", \" \\\"Networks\\\": {\", \" \\\"host\\\": {\", \" \\\"IPAMConfig\\\": null,\", \" \\\"Links\\\": null,\", \" \\\"Aliases\\\": null,\", \" \\\"NetworkID\\\": \\\"c9c6a3bb3898616d34a69d242692ca582d49c06f21e38e564a1a1599d7e4f817\\\",\", \" \\\"EndpointID\\\": \\\"d91011ef02755b329a8a875f6807706c8374765c46706ed5043bb7dd08eab78d\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"MacAddress\\\": \\\"\\\"\", \" }\", \" }\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Monday 25 June 2018 06:06:14 -0400 (0:00:00.692) 0:01:36.240 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Monday 25 June 2018 06:06:14 -0400 (0:00:00.045) 0:01:36.285 *********** ", "skipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Monday 25 June 2018 06:06:14 -0400 (0:00:00.050) 0:01:36.336 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Monday 25 June 2018 06:06:14 -0400 (0:00:00.044) 0:01:36.381 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Monday 25 June 2018 06:06:15 -0400 (0:00:00.049) 0:01:36.431 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Monday 25 June 2018 06:06:15 -0400 (0:00:00.042) 0:01:36.473 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Monday 25 June 2018 06:06:15 -0400 (0:00:00.042) 0:01:36.516 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\"], \"delta\": \"0:00:00.032827\", \"end\": \"2018-06-25 10:06:15.896393\", \"failed_when_result\": false, \"rc\": 
0, \"start\": \"2018-06-25 10:06:15.863566\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": 
\\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n 
\\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} 
--entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/2e4510fb398c1ae72535c5c3f1f0f1546729fe945cd85f87dd450c522e8905ab/diff:/var/lib/docker/overlay2/ba0a06d1080745666a14fd468c920651d33a74f62e3c7d02ed110dfc641fac15/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n 
\\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" 
\\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" 
\\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" 
\\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/2e4510fb398c1ae72535c5c3f1f0f1546729fe945cd85f87dd450c522e8905ab/diff:/var/lib/docker/overlay2/ba0a06d1080745666a14fd468c920651d33a74f62e3c7d02ed110dfc641fac15/diff\\\",\", \" \\\"MergedDir\\\": 
\\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Monday 25 June 2018 06:06:15 -0400 (0:00:00.684) 0:01:37.200 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Monday 25 June 2018 06:06:15 -0400 (0:00:00.047) 0:01:37.248 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Monday 25 June 2018 06:06:15 -0400 (0:00:00.045) 0:01:37.293 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Monday 25 June 2018 06:06:15 -0400 (0:00:00.041) 0:01:37.335 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Monday 25 June 2018 06:06:16 -0400 (0:00:00.128) 0:01:37.463 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Monday 25 June 2018 06:06:16 -0400 (0:00:00.044) 0:01:37.508 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Monday 25 June 2018 06:06:16 -0400 (0:00:00.042) 0:01:37.551 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_mon_image_repodigest_before_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Monday 25 June 2018 06:06:16 -0400 (0:00:00.076) 0:01:37.627 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Monday 25 June 2018 06:06:16 -0400 
(0:00:00.046) 0:01:37.674 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Monday 25 June 2018 06:06:16 -0400 (0:00:00.043) 0:01:37.717 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Monday 25 June 2018 06:06:16 -0400 (0:00:00.052) 0:01:37.769 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Monday 25 June 2018 06:06:16 -0400 (0:00:00.048) 0:01:37.817 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Monday 25 June 2018 06:06:16 -0400 (0:00:00.044) 0:01:37.862 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Monday 25 June 2018 06:06:16 -0400 (0:00:00.043) 0:01:37.906 *********** ", "ok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": 
\"0:00:00.038939\", \"end\": \"2018-06-25 10:06:17.182628\", \"rc\": 0, \"start\": \"2018-06-25 10:06:17.143689\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-6: Pulling from 192.168.24.1:8787/rhceph\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Image is up to date for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Image is up to date for 192.168.24.1:8787/rhceph:3-6\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Monday 25 June 2018 06:06:17 -0400 (0:00:00.582) 0:01:38.489 *********** ", "changed: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.031376\", \"end\": \"2018-06-25 10:06:17.748448\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:06:17.717072\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n 
\\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} 
--entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n 
\\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": 
\\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/2e4510fb398c1ae72535c5c3f1f0f1546729fe945cd85f87dd450c522e8905ab/diff:/var/lib/docker/overlay2/ba0a06d1080745666a14fd468c920651d33a74f62e3c7d02ed110dfc641fac15/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", 
\" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" 
\\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" 
\\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d 
--net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/2e4510fb398c1ae72535c5c3f1f0f1546729fe945cd85f87dd450c522e8905ab/diff:/var/lib/docker/overlay2/ba0a06d1080745666a14fd468c920651d33a74f62e3c7d02ed110dfc641fac15/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/daf21be57606d838c4bf1de809dba8faf7ee281cbde06af40abd777bfa329d33/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Monday 25 June 2018 06:06:17 -0400 (0:00:00.570) 0:01:39.059 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Monday 25 June 2018 06:06:17 -0400 (0:00:00.074) 0:01:39.133 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Monday 25 June 2018 06:06:17 -0400 (0:00:00.047) 0:01:39.181 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Monday 25 June 2018 06:06:17 -0400 (0:00:00.050) 0:01:39.232 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Monday 25 June 2018 06:06:17 -0400 (0:00:00.044) 0:01:39.277 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Monday 25 June 2018 06:06:17 -0400 (0:00:00.045) 
0:01:39.322 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Monday 25 June 2018 06:06:17 -0400 (0:00:00.050) 0:01:39.372 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Monday 25 June 2018 06:06:18 -0400 (0:00:00.048) 0:01:39.421 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Monday 25 June 2018 06:06:18 -0400 (0:00:00.056) 0:01:39.478 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Monday 25 June 2018 06:06:18 -0400 (0:00:00.052) 0:01:39.530 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Monday 25 June 2018 06:06:18 -0400 (0:00:00.051) 0:01:39.581 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] 
*********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Monday 25 June 2018 06:06:18 -0400 (0:00:00.045) 0:01:39.627 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Monday 25 June 2018 06:06:18 -0400 (0:00:00.047) 0:01:39.674 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.528437\", \"end\": \"2018-06-25 10:06:19.423155\", \"rc\": 0, \"start\": \"2018-06-25 10:06:18.894718\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Monday 25 June 2018 06:06:19 -0400 (0:00:01.046) 0:01:40.721 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Monday 25 June 2018 06:06:19 -0400 (0:00:00.079) 0:01:40.800 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Monday 25 June 2018 
06:06:19 -0400 (0:00:00.052) 0:01:40.853 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Monday 25 June 2018 06:06:19 -0400 (0:00:00.048) 0:01:40.902 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Monday 25 June 2018 06:06:19 -0400 (0:00:00.081) 0:01:40.983 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Monday 25 June 2018 06:06:19 -0400 (0:00:00.049) 0:01:41.033 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Monday 25 June 2018 06:06:19 -0400 (0:00:00.049) 0:01:41.082 *********** ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": 
\"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Monday 25 June 2018 06:06:22 -0400 (0:00:02.315) 0:01:43.398 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Monday 25 June 2018 06:06:22 -0400 (0:00:00.048) 0:01:43.447 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not 
exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Monday 25 June 2018 06:06:22 -0400 (0:00:00.051) 0:01:43.498 *********** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 80, \"state\": \"directory\", \"uid\": 988}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Monday 25 June 2018 06:06:22 -0400 (0:00:00.195) 0:01:43.693 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Monday 25 June 2018 06:06:22 -0400 (0:00:00.052) 0:01:43.746 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Monday 25 June 2018 06:06:22 -0400 (0:00:00.046) 0:01:43.792 *********** ", "changed: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Monday 25 June 2018 06:06:22 -0400 (0:00:00.537) 0:01:44.329 *********** ", "ok: [controller-0] => {\"changed\": false, \"checksum\": 
\"677880bddaa262c511eb635c230f19e6a4ddfabe\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"13cb0c834a94e4916365ae02ba1fbe9e\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 761, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921182.99-109746653871474/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Monday 25 June 2018 06:06:24 -0400 (0:00:01.848) 0:01:46.178 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : set_fact docker_exec_cmd] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:2", "Monday 25 June 2018 06:06:24 -0400 (0:00:00.049) 0:01:46.227 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd_mgr\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-mgr : create mgr directory] *****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:2", "Monday 25 June 2018 06:06:25 -0400 (0:00:00.192) 0:01:46.420 *********** ", "ok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-mgr : copy ceph keyring(s) if needed] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:10", "Monday 25 June 2018 06:06:25 -0400 (0:00:00.539) 0:01:46.960 *********** ", "changed: [controller-0] => (item={u'dest': u'/var/lib/ceph/mgr/ceph-controller-0/keyring', u'name': 
u'/etc/ceph/ceph.mgr.controller-0.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"dce8b853b5430d214621f9e0ba7d2feebbb2a1a5\", \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": true, \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"name\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"md5sum\": \"46173b1f477ccec40e6961621fd8c750\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921185.72-95684110235710/source\", \"state\": \"file\", \"uid\": 167}", "skipping: [controller-0] => (item={u'dest': u'/etc/ceph/ceph.client.admin.keyring', u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, \"item\": {\"copy_key\": false, \"dest\": \"/etc/ceph/ceph.client.admin.keyring\", \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : set mgr key permissions] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:24", "Monday 25 June 2018 06:06:28 -0400 (0:00:02.802) 0:01:49.763 *********** ", "ok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"state\": \"file\", \"uid\": 167}", "", "TASK [ceph-mgr : install ceph-mgr package on RedHat or SUSE] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:2", "Monday 25 June 2018 06:06:28 -0400 (0:00:00.629) 0:01:50.392 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : install ceph mgr for debian] **********************************", "task 
path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:9", "Monday 25 June 2018 06:06:29 -0400 (0:00:00.046) 0:01:50.438 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : ensure systemd service override directory exists] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:17", "Monday 25 June 2018 06:06:29 -0400 (0:00:00.044) 0:01:50.483 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : add ceph-mgr systemd service overrides] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:25", "Monday 25 June 2018 06:06:29 -0400 (0:00:00.052) 0:01:50.535 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : start and add that the mgr service to the init sequence] ******", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:35", "Monday 25 June 2018 06:06:29 -0400 (0:00:00.046) 0:01:50.581 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : generate systemd unit file] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:2", "Monday 25 June 2018 06:06:29 -0400 (0:00:00.047) 0:01:50.629 *********** ", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0", 
"changed: [controller-0] => {\"changed\": true, \"checksum\": \"fb2f3078fffe963a7fd0473c7b908931939d5c73\", \"dest\": \"/etc/systemd/system/ceph-mgr@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"7b527fb0a44d25cf825cb2b6fcb2b07e\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 733, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921189.37-59898925764271/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-mgr : systemd start mgr container] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:13", "Monday 25 June 2018 06:06:32 -0400 (0:00:02.982) 0:01:53.612 *********** ", "ok: [controller-0] => {\"changed\": false, \"enabled\": true, \"name\": \"ceph-mgr@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dmgr.slice basic.target docker.service systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Manager\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", 
\"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --net=host --memory=1g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro -e CLUSTER=ceph -e CEPH_DAEMON=MGR -e MGR_DASHBOARD=0 --name=ceph-mgr-controller-0 192.168.24.1:8787/rhceph:3-6 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mgr@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mgr@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127793\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", 
\"LimitSIGPENDING\": \"127793\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mgr@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmgr.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": 
\"system-ceph\\\\x5cx2dmgr.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "", "TASK [ceph-mgr : get enabled modules from ceph-mgr] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:19", "Monday 25 June 2018 06:06:33 -0400 (0:00:00.788) 0:01:54.401 *********** ", "changed: [controller-0 -> 192.168.24.14] => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"--format\", \"json\", \"mgr\", \"module\", \"ls\"], \"delta\": \"0:00:00.340218\", \"end\": \"2018-06-25 10:06:34.009653\", \"rc\": 0, \"start\": \"2018-06-25 10:06:33.669435\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"enabled_modules\\\":[\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[]}\", \"stdout_lines\": [\"\", \"{\\\"enabled_modules\\\":[\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[]}\"]}", "", "TASK [ceph-mgr : set _ceph_mgr_modules fact] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:26", "Monday 25 June 2018 06:06:33 -0400 (0:00:00.905) 0:01:55.306 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_ceph_mgr_modules\": {\"disabled_modules\": [], \"enabled_modules\": [\"restful\", \"status\"]}}, \"changed\": false}", "", "TASK [ceph-mgr : disable ceph mgr enabled modules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:30", "Monday 25 June 2018 06:06:34 -0400 (0:00:00.104) 0:01:55.411 *********** ", "changed: [controller-0 -> 192.168.24.14] => (item=restful) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"mgr\", \"module\", \"disable\", \"restful\"], \"delta\": \"0:00:01.367127\", \"end\": \"2018-06-25 10:06:36.007945\", \"item\": \"restful\", \"rc\": 0, \"start\": \"2018-06-25 10:06:34.640818\", \"stderr\": \"\", \"stderr_lines\": [], 
\"stdout\": \"\", \"stdout_lines\": []}", "skipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : add modules to ceph-mgr] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:41", "Monday 25 June 2018 06:06:35 -0400 (0:00:01.938) 0:01:57.349 *********** ", "skipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Monday 25 June 2018 06:06:35 -0400 (0:00:00.030) 0:01:57.380 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Monday 25 June 2018 06:06:36 -0400 (0:00:00.073) 0:01:57.453 *********** ", "ok: [controller-0] => {\"changed\": false, \"checksum\": \"f36b3460f6762a853a3dab1958afb7d83ff8f234\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"mode\": \"0750\", \"owner\": \"root\", \"path\": \"/tmp/restart_mgr_daemon.sh\", \"secontext\": \"unconfined_u:object_r:user_home_t:s0\", \"size\": 570, \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Monday 25 June 2018 06:06:38 -0400 (0:00:01.990) 0:01:59.443 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Monday 25 June 2018 06:06:38 -0400 (0:00:00.082) 0:01:59.526 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", 
"", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Monday 25 June 2018 06:06:38 -0400 (0:00:00.119) 0:01:59.646 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [set ceph manager install 'Complete'] *************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:129", "Monday 25 June 2018 06:06:38 -0400 (0:00:00.092) 0:01:59.738 *********** ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"end\": \"20180625060638Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "PLAY [osds] ********************************************************************", "", "TASK [set ceph osd install 'In Progress'] **************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:141", "Monday 25 June 2018 06:06:38 -0400 (0:00:00.143) 0:01:59.882 *********** ", "ok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"start\": \"20180625060638Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Monday 25 June 2018 06:06:38 -0400 (0:00:00.080) 0:01:59.962 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Monday 25 June 2018 06:06:38 -0400 (0:00:00.040) 0:02:00.002 *********** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", 
\"--filter=name=ceph-osd-ceph-0\"], \"delta\": \"0:00:00.026145\", \"end\": \"2018-06-25 10:06:39.219526\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:06:39.193381\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.502) 0:02:00.505 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.043) 0:02:00.549 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.040) 0:02:00.590 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.045) 0:02:00.635 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.038) 0:02:00.674 *********** ", "skipping: [ceph-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.038) 0:02:00.712 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.039) 0:02:00.751 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.036) 0:02:00.788 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.037) 0:02:00.826 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.039) 0:02:00.865 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.041) 0:02:00.906 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.038) 0:02:00.945 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.039) 0:02:00.984 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.038) 0:02:01.023 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.039) 0:02:01.062 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.038) 0:02:01.101 *********** ", "skipping: [ceph-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.045) 0:02:01.146 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.144) 0:02:01.290 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.041) 0:02:01.332 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Monday 25 June 2018 06:06:39 -0400 (0:00:00.040) 0:02:01.373 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Monday 25 June 2018 06:06:40 -0400 (0:00:00.040) 0:02:01.413 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Monday 25 June 2018 06:06:40 -0400 (0:00:00.039) 0:02:01.453 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Monday 25 June 2018 06:06:40 -0400 (0:00:00.038) 0:02:01.491 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Monday 25 June 2018 06:06:40 -0400 (0:00:00.046) 0:02:01.537 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Monday 25 June 2018 06:06:40 -0400 (0:00:00.038) 0:02:01.576 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Monday 25 June 2018 06:06:40 -0400 (0:00:00.040) 0:02:01.617 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Monday 25 June 2018 06:06:40 -0400 (0:00:00.040) 0:02:01.658 *********** ", "ok: [ceph-0] => 
{\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Monday 25 June 2018 06:06:40 -0400 (0:00:00.478) 0:02:02.136 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Monday 25 June 2018 06:06:40 -0400 (0:00:00.067) 0:02:02.203 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"monitor_name\": \"ceph-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Monday 25 June 2018 06:06:40 -0400 (0:00:00.065) 0:02:02.269 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Monday 25 June 2018 06:06:40 -0400 (0:00:00.063) 0:02:02.332 *********** ", "ok: [ceph-0 -> 192.168.24.14] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] 
********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Monday 25 June 2018 06:06:41 -0400 (0:00:00.132) 0:02:02.464 *********** ", "ok: [ceph-0 -> 192.168.24.14] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.366077\", \"end\": \"2018-06-25 10:06:42.095079\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:06:41.729002\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"78ace352-763a-11e8-9c1d-525400166144\", \"stdout_lines\": [\"78ace352-763a-11e8-9c1d-525400166144\"]}", "", "TASK [ceph-defaults : check if /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Monday 25 June 2018 06:06:42 -0400 (0:00:00.932) 0:02:03.397 *********** ", "ok: [ceph-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Monday 25 June 2018 06:06:42 -0400 (0:00:00.182) 0:02:03.579 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Monday 25 June 2018 06:06:42 -0400 (0:00:00.046) 0:02:03.626 *********** ", "ok: [ceph-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 80, \"state\": \"directory\", \"uid\": 988}", "", "TASK [ceph-defaults : 
set_fact fsid ceph_current_fsid.stdout] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Monday 25 June 2018 06:06:42 -0400 (0:00:00.189) 0:02:03.816 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"fsid\": \"78ace352-763a-11e8-9c1d-525400166144\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Monday 25 June 2018 06:06:42 -0400 (0:00:00.067) 0:02:03.883 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85", "Monday 25 June 2018 06:06:42 -0400 (0:00:00.069) 0:02:03.952 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96", "Monday 25 June 2018 06:06:42 -0400 (0:00:00.040) 0:02:03.993 *********** ", "ok: [ceph-0 -> localhost] => {\"changed\": false, \"cmd\": \"echo 78ace352-763a-11e8-9c1d-525400166144 | tee /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"rc\": 0, \"stdout\": \"skipped, since /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\", \"stdout_lines\": [\"skipped, since /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\"]}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105", "Monday 25 June 2018 06:06:42 -0400 (0:00:00.186) 0:02:04.180 *********** ", 
"skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117", "Monday 25 June 2018 06:06:42 -0400 (0:00:00.046) 0:02:04.226 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123", "Monday 25 June 2018 06:06:42 -0400 (0:00:00.037) 0:02:04.264 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"mds_name\": \"ceph-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129", "Monday 25 June 2018 06:06:42 -0400 (0:00:00.064) 0:02:04.329 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135", "Monday 25 June 2018 06:06:42 -0400 (0:00:00.037) 0:02:04.366 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Monday 25 June 2018 06:06:43 -0400 (0:00:00.066) 0:02:04.433 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Monday 25 June 2018 
06:06:43 -0400 (0:00:00.069) 0:02:04.502 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Monday 25 June 2018 06:06:43 -0400 (0:00:00.069) 0:02:04.571 *********** ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdb\"], \"delta\": \"0:00:00.002517\", \"end\": \"2018-06-25 10:06:43.759815\", \"item\": \"/dev/vdb\", \"rc\": 0, \"start\": \"2018-06-25 10:06:43.757298\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdb\", \"stdout_lines\": [\"/dev/vdb\"]}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166", "Monday 25 June 2018 06:06:43 -0400 (0:00:00.476) 0:02:05.048 *********** ", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-06-25 10:06:43.759815', '_ansible_no_log': False, u'stdout': u'/dev/vdb', u'cmd': [u'readlink', u'-f', u'/dev/vdb'], u'rc': 0, 'item': u'/dev/vdb', u'delta': u'0:00:00.002517', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdb', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdb'], u'start': u'2018-06-25 10:06:43.757298', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdb\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdb\"], \"delta\": \"0:00:00.002517\", \"end\": \"2018-06-25 10:06:43.759815\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdb\", 
\"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdb\", \"rc\": 0, \"start\": \"2018-06-25 10:06:43.757298\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdb\", \"stdout_lines\": [\"/dev/vdb\"]}}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175", "Monday 25 June 2018 06:06:43 -0400 (0:00:00.081) 0:02:05.130 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\"]}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183", "Monday 25 June 2018 06:06:43 -0400 (0:00:00.072) 0:02:05.203 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Monday 25 June 2018 06:06:43 -0400 (0:00:00.039) 0:02:05.243 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Monday 25 June 2018 06:06:43 -0400 (0:00:00.041) 0:02:05.285 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Monday 25 June 2018 06:06:43 -0400 (0:00:00.042) 0:02:05.327 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional 
result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Monday 25 June 2018 06:06:43 -0400 (0:00:00.043) 0:02:05.371 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Monday 25 June 2018 06:06:44 -0400 (0:00:00.065) 0:02:05.436 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Monday 25 June 2018 06:06:44 -0400 (0:00:00.065) 0:02:05.501 *********** ", "changed: [ceph-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", 
\"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": 
\"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Monday 25 June 2018 06:06:49 -0400 (0:00:04.990) 0:02:10.492 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Monday 25 June 2018 06:06:49 -0400 (0:00:00.044) 0:02:10.536 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, 
radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Monday 25 June 2018 06:06:49 -0400 (0:00:00.044) 0:02:10.581 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Monday 25 June 2018 06:06:49 -0400 (0:00:00.043) 0:02:10.625 *********** ", "ok: [ceph-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [ceph-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Monday 25 June 2018 06:06:50 -0400 (0:00:00.906) 0:02:11.531 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"monitor_name\": \"ceph-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Monday 25 June 2018 06:06:50 -0400 (0:00:00.167) 0:02:11.699 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Monday 25 June 2018 06:06:50 -0400 (0:00:00.039) 
0:02:11.738 *********** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.023208\", \"end\": \"2018-06-25 10:06:51.048492\", \"rc\": 0, \"start\": \"2018-06-25 10:06:51.025284\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 94f4240/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 94f4240/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Monday 25 June 2018 06:06:50 -0400 (0:00:00.600) 0:02:12.338 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Monday 25 June 2018 06:06:51 -0400 (0:00:00.063) 0:02:12.401 *********** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-ceph-0\"], \"delta\": \"0:00:00.026892\", \"end\": \"2018-06-25 10:06:51.617577\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:06:51.590685\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Monday 25 June 2018 06:06:51 -0400 (0:00:00.500) 0:02:12.902 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK 
[ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Monday 25 June 2018 06:06:51 -0400 (0:00:00.171) 0:02:13.074 *********** ", "ok: [ceph-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Monday 25 June 2018 06:06:51 -0400 (0:00:00.119) 0:02:13.194 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Monday 25 June 2018 06:06:51 -0400 (0:00:00.077) 0:02:13.272 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Monday 25 June 2018 06:06:51 -0400 (0:00:00.084) 0:02:13.356 *********** ", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"atime\": 1529921145.491, 
\"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"87f8e20ff9c54bcb76bf97228cb0ba705b439784\", \"ctime\": 1529921145.49, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 105756168, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921145.49, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 159, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"atime\": 1529921145.947, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"ae4c70255ca42eb77eacd1cf1db0492ada8c18ae\", \"ctime\": 1529921145.947, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 105756169, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921145.947, \"nlink\": 1, \"path\": 
\"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 688, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 1529921146.43, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"502b9fd25b9d73522bc5c0029ec362bd3ef148be\", \"ctime\": 1529921146.43, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 223936, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921146.43, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1529921146.928, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": 
\"381a02ebfa1216a2a279ae665eeaebd1ce6de5f5\", \"ctime\": 1529921146.928, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 7030010, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921146.928, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1529921147.406, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"3540de06c3ed498809bdddd6a350cae592455923\", \"ctime\": 1529921147.406, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 10981164, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921147.406, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, 
\"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1529921147.902, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"c3545cb2f74ad0b3c3491481b9215a04221dc20f\", \"ctime\": 1529921147.902, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 13656890, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921147.902, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"atime\": 1529921186.373, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"dce8b853b5430d214621f9e0ba7d2feebbb2a1a5\", \"ctime\": 1529921150.129, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 105756170, \"isblk\": false, \"ischr\": false, \"isdir\": false, 
\"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921150.129, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Monday 25 June 2018 06:06:53 -0400 (0:00:01.232) 0:02:14.589 *********** ", "skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529921145.49, u'block_size': 4096, u'inode': 105756168, u'isgid': False, u'size': 159, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring', u'xusr': False, u'atime': 1529921145.491, u'isdir': False, u'ctime': 1529921145.49, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'87f8e20ff9c54bcb76bf97228cb0ba705b439784', u'rusr': True, u'attributes': []}, u'changed': False, 
'_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"atime\": 1529921145.491, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"87f8e20ff9c54bcb76bf97228cb0ba705b439784\", \"ctime\": 1529921145.49, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 105756168, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921145.49, 
\"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.client.admin.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 159, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": 
\"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529921145.947, u'block_size': 4096, u'inode': 105756169, u'isgid': False, u'size': 688, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring', u'xusr': False, u'atime': 1529921145.947, u'isdir': False, u'ctime': 1529921145.947, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'ae4c70255ca42eb77eacd1cf1db0492ada8c18ae', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, 
\"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"atime\": 1529921145.947, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"ae4c70255ca42eb77eacd1cf1db0492ada8c18ae\", \"ctime\": 1529921145.947, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 105756169, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921145.947, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mon.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 688, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529921146.43, u'block_size': 4096, u'inode': 223936, u'isgid': False, u'size': 113, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, 
u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring', u'xusr': False, u'atime': 1529921146.43, u'isdir': False, u'ctime': 1529921146.43, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'502b9fd25b9d73522bc5c0029ec362bd3ef148be', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 1529921146.43, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"502b9fd25b9d73522bc5c0029ec362bd3ef148be\", \"ctime\": 1529921146.43, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 223936, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921146.43, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529921146.928, u'block_size': 4096, u'inode': 7030010, u'isgid': False, u'size': 113, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': 
u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'xusr': False, u'atime': 1529921146.928, u'isdir': False, u'ctime': 1529921146.928, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'381a02ebfa1216a2a279ae665eeaebd1ce6de5f5', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1529921146.928, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": 
\"unknown\", \"checksum\": \"381a02ebfa1216a2a279ae665eeaebd1ce6de5f5\", \"ctime\": 1529921146.928, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 7030010, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921146.928, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529921147.406, u'block_size': 4096, u'inode': 10981164, u'isgid': False, u'size': 113, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring', u'xusr': False, u'atime': 1529921147.406, u'isdir': False, u'ctime': 1529921147.406, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': 
u'3540de06c3ed498809bdddd6a350cae592455923', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1529921147.406, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"3540de06c3ed498809bdddd6a350cae592455923\", \"ctime\": 1529921147.406, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 10981164, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": 
true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921147.406, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529921147.902, u'block_size': 4096, u'inode': 13656890, u'isgid': False, u'size': 113, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'xusr': False, u'atime': 1529921147.902, u'isdir': False, u'ctime': 1529921147.902, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'c3545cb2f74ad0b3c3491481b9215a04221dc20f', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': 
{u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1529921147.902, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"c3545cb2f74ad0b3c3491481b9215a04221dc20f\", \"ctime\": 1529921147.902, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 13656890, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921147.902, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": 
true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529921150.129, u'block_size': 4096, u'inode': 105756170, u'isgid': False, u'size': 67, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1529921186.373, u'isdir': False, u'ctime': 1529921150.129, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'dce8b853b5430d214621f9e0ba7d2feebbb2a1a5', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': 
None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"atime\": 1529921186.373, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"dce8b853b5430d214621f9e0ba7d2feebbb2a1a5\", \"ctime\": 1529921150.129, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 105756170, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529921150.129, \"nlink\": 1, \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] 
*******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Monday 25 June 2018 06:06:53 -0400 (0:00:00.271) 0:02:14.861 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Monday 25 June 2018 06:06:53 -0400 (0:00:00.039) 0:02:14.900 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Monday 25 June 2018 06:06:53 -0400 (0:00:00.037) 0:02:14.938 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Monday 25 June 2018 06:06:53 -0400 (0:00:00.043) 0:02:14.981 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Monday 25 June 2018 06:06:53 -0400 (0:00:00.042) 0:02:15.023 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Monday 25 June 2018 06:06:53 -0400 (0:00:00.042) 0:02:15.065 *********** ", "skipping: [ceph-0] 
=> {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Monday 25 June 2018 06:06:53 -0400 (0:00:00.038) 0:02:15.104 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Monday 25 June 2018 06:06:53 -0400 (0:00:00.038) 0:02:15.143 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Monday 25 June 2018 06:06:53 -0400 (0:00:00.045) 0:02:15.189 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Monday 25 June 2018 06:06:53 -0400 (0:00:00.041) 0:02:15.230 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Monday 25 June 2018 06:06:53 -0400 (0:00:00.047) 0:02:15.277 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Monday 25 June 2018 06:06:53 -0400 (0:00:00.041) 0:02:15.319 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Monday 25 June 2018 06:06:53 -0400 (0:00:00.042) 0:02:15.362 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Monday 25 June 2018 06:06:54 -0400 (0:00:00.041) 0:02:15.404 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Monday 25 June 2018 06:06:54 -0400 (0:00:00.049) 0:02:15.454 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Monday 25 June 2018 06:06:54 -0400 (0:00:00.141) 0:02:15.595 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Monday 25 June 2018 06:06:54 -0400 (0:00:00.040) 0:02:15.635 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Monday 25 June 2018 06:06:54 -0400 (0:00:00.043) 0:02:15.679 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Monday 25 June 2018 06:06:54 -0400 (0:00:00.039) 0:02:15.719 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Monday 25 June 2018 06:06:54 -0400 (0:00:00.038) 0:02:15.757 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Monday 25 June 2018 06:06:54 -0400 (0:00:00.037) 0:02:15.795 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Monday 25 June 2018 06:06:54 -0400 (0:00:00.046) 0:02:15.841 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Monday 25 June 2018 06:06:54 -0400 
(0:00:00.036) 0:02:15.878 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Monday 25 June 2018 06:06:54 -0400 (0:00:00.038) 0:02:15.916 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Monday 25 June 2018 06:06:54 -0400 (0:00:00.041) 0:02:15.958 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Monday 25 June 2018 06:06:54 -0400 (0:00:00.038) 0:02:15.996 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Monday 25 June 2018 06:06:54 -0400 (0:00:00.038) 0:02:16.034 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Monday 25 June 2018 06:06:54 -0400 (0:00:00.045) 0:02:16.080 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task 
path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Monday 25 June 2018 06:06:54 -0400 (0:00:00.037) 0:02:16.118 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Monday 25 June 2018 06:06:54 -0400 (0:00:00.041) 0:02:16.159 *********** ", "ok: [ceph-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:17.395069\", \"end\": \"2018-06-25 10:07:12.744333\", \"rc\": 0, \"start\": \"2018-06-25 10:06:55.349264\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-6: Pulling from 192.168.24.1:8787/rhceph\\n9a32f102e677: Pulling fs layer\\nb8aa42cec17a: Pulling fs layer\\nf00cbf28d025: Pulling fs layer\\nb8aa42cec17a: Verifying Checksum\\nb8aa42cec17a: Download complete\\n9a32f102e677: Verifying Checksum\\n9a32f102e677: Download complete\\nf00cbf28d025: Verifying Checksum\\nf00cbf28d025: Download complete\\n9a32f102e677: Pull complete\\nb8aa42cec17a: Pull complete\\nf00cbf28d025: Pull complete\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"9a32f102e677: Pulling fs layer\", \"b8aa42cec17a: Pulling fs layer\", \"f00cbf28d025: Pulling fs layer\", \"b8aa42cec17a: Verifying Checksum\", \"b8aa42cec17a: Download complete\", \"9a32f102e677: Verifying Checksum\", \"9a32f102e677: Download complete\", \"f00cbf28d025: Verifying Checksum\", \"f00cbf28d025: Download complete\", \"9a32f102e677: Pull complete\", \"b8aa42cec17a: Pull complete\", \"f00cbf28d025: Pull complete\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Monday 25 June 2018 06:07:12 -0400 (0:00:17.877) 0:02:34.036 *********** ", "changed: [ceph-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.025287\", \"end\": \"2018-06-25 10:07:13.241136\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:07:13.215849\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n 
\\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 
3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": 
\\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, 
Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/adf529a68f129c324f6caf826daa5f1bce018463f36dfe2327784845fb5bcf1d/diff:/var/lib/docker/overlay2/17a450161a816794364817ac5f8af8e22dd8241580c1df1e76d45fa5ddd83ad5/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/690fbc9b48c94d96067ff34224e3e1dd07727583cc96875ee18905d9d5fdf05a/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/690fbc9b48c94d96067ff34224e3e1dd07727583cc96875ee18905d9d5fdf05a/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/690fbc9b48c94d96067ff34224e3e1dd07727583cc96875ee18905d9d5fdf05a/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" 
\\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e 
CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" 
\\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e 
CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/adf529a68f129c324f6caf826daa5f1bce018463f36dfe2327784845fb5bcf1d/diff:/var/lib/docker/overlay2/17a450161a816794364817ac5f8af8e22dd8241580c1df1e76d45fa5ddd83ad5/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/690fbc9b48c94d96067ff34224e3e1dd07727583cc96875ee18905d9d5fdf05a/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/690fbc9b48c94d96067ff34224e3e1dd07727583cc96875ee18905d9d5fdf05a/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/690fbc9b48c94d96067ff34224e3e1dd07727583cc96875ee18905d9d5fdf05a/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Monday 25 June 2018 06:07:13 -0400 (0:00:00.502) 0:02:34.539 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Monday 25 June 2018 06:07:13 -0400 (0:00:00.073) 0:02:34.612 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Monday 25 June 2018 06:07:13 -0400 (0:00:00.042) 0:02:34.654 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Monday 25 June 2018 06:07:13 -0400 (0:00:00.047) 0:02:34.702 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Monday 25 June 2018 06:07:13 -0400 (0:00:00.042) 0:02:34.745 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Monday 25 June 2018 06:07:13 -0400 (0:00:00.040) 0:02:34.785 *********** ", "skipping: 
[ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Monday 25 June 2018 06:07:13 -0400 (0:00:00.040) 0:02:34.826 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Monday 25 June 2018 06:07:13 -0400 (0:00:00.040) 0:02:34.866 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Monday 25 June 2018 06:07:13 -0400 (0:00:00.043) 0:02:34.910 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Monday 25 June 2018 06:07:13 -0400 (0:00:00.039) 0:02:34.949 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Monday 25 June 2018 06:07:13 -0400 (0:00:00.040) 0:02:34.990 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Monday 25 June 2018 06:07:13 -0400 (0:00:00.047) 0:02:35.038 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Monday 25 June 2018 06:07:13 -0400 (0:00:00.041) 0:02:35.079 *********** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.507126\", \"end\": \"2018-06-25 10:07:14.772392\", \"rc\": 0, \"start\": \"2018-06-25 10:07:14.265266\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Monday 25 June 2018 06:07:14 -0400 (0:00:00.991) 0:02:36.070 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Monday 25 June 2018 06:07:14 -0400 (0:00:00.068) 0:02:36.139 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Monday 25 June 2018 06:07:14 -0400 (0:00:00.047) 0:02:36.186 *********** ", "skipping: 
[ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Monday 25 June 2018 06:07:14 -0400 (0:00:00.045) 0:02:36.232 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Monday 25 June 2018 06:07:14 -0400 (0:00:00.077) 0:02:36.309 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Monday 25 June 2018 06:07:14 -0400 (0:00:00.043) 0:02:36.352 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Monday 25 June 2018 06:07:15 -0400 (0:00:00.040) 0:02:36.393 *********** ", "changed: [ceph-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, 
\"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Monday 25 June 2018 06:07:17 -0400 (0:00:02.219) 0:02:38.612 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Monday 25 June 2018 06:07:17 -0400 (0:00:00.047) 0:02:38.660 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Monday 25 June 2018 06:07:17 -0400 
(0:00:00.045) 0:02:38.705 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Monday 25 June 2018 06:07:17 -0400 (0:00:00.053) 0:02:38.758 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Monday 25 June 2018 06:07:17 -0400 (0:00:00.043) 0:02:38.802 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Monday 25 June 2018 06:07:17 -0400 (0:00:00.039) 0:02:38.842 *********** ", "changed: [ceph-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Monday 25 June 2018 06:07:17 -0400 (0:00:00.484) 0:02:39.327 *********** ", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy mon restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set 
_osd_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy osd restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy mds restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy rgw restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for ceph-0", 
"NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for ceph-0", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"4bdff1e64c4372595a71f3d358e1307a2bca8746\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"6419252280b3f08dc5a58f4743435fb1\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 871, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921237.98-252098485242928/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Monday 25 June 2018 06:07:20 -0400 (0:00:03.026) 0:02:42.353 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure public_network configured] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:2", "Monday 25 June 2018 06:07:21 -0400 (0:00:00.041) 0:02:42.394 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure cluster_network configured] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:8", "Monday 25 June 2018 06:07:21 -0400 (0:00:00.044) 0:02:42.438 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure journal_size configured] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:15", "Monday 25 June 2018 06:07:21 -0400 (0:00:00.045) 0:02:42.484 *********** ", "ok: [ceph-0] => {", " \"msg\": \"WARNING: journal_size is configured to 512, which is less than 5GB. 
This is not recommended and can lead to severe issues.\"", "}", "", "TASK [ceph-osd : make sure an osd scenario was chosen] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:23", "Monday 25 June 2018 06:07:21 -0400 (0:00:00.074) 0:02:42.558 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure a valid osd scenario was chosen] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:31", "Monday 25 June 2018 06:07:21 -0400 (0:00:00.046) 0:02:42.604 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : verify devices have been provided] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:39", "Monday 25 June 2018 06:07:21 -0400 (0:00:00.044) 0:02:42.649 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : check if osd_scenario lvm is supported by the selected ceph version] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:49", "Monday 25 June 2018 06:07:21 -0400 (0:00:00.051) 0:02:42.701 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : verify lvm_volumes have been provided] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:59", "Monday 25 June 2018 06:07:21 -0400 (0:00:00.044) 0:02:42.746 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure the lvm_volumes variable is a list] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:69", 
"Monday 25 June 2018 06:07:21 -0400 (0:00:00.053) 0:02:42.799 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure the devices variable is a list] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:79", "Monday 25 June 2018 06:07:21 -0400 (0:00:00.048) 0:02:42.848 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : verify dedicated devices have been provided] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:88", "Monday 25 June 2018 06:07:21 -0400 (0:00:00.049) 0:02:42.897 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure the dedicated_devices variable is a list] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:98", "Monday 25 June 2018 06:07:21 -0400 (0:00:00.048) 0:02:42.945 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : check if bluestore is supported by the selected ceph version] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:109", "Monday 25 June 2018 06:07:21 -0400 (0:00:00.046) 0:02:42.992 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include system_tuning.yml] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:5", "Monday 25 June 2018 06:07:21 -0400 (0:00:00.045) 0:02:43.037 *********** ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml for ceph-0", "", "TASK [ceph-osd : disable osd directory parsing by updatedb] ********************", "task path: 
/usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:2", "Monday 25 June 2018 06:07:21 -0400 (0:00:00.074) 0:02:43.112 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : disable osd directory path in updatedb.conf] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:11", "Monday 25 June 2018 06:07:21 -0400 (0:00:00.039) 0:02:43.151 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : create tmpfiles.d directory] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:22", "Monday 25 June 2018 06:07:21 -0400 (0:00:00.040) 0:02:43.192 *********** ", "ok: [ceph-0] => {\"changed\": false, \"gid\": 0, \"group\": \"root\", \"mode\": \"0755\", \"owner\": \"root\", \"path\": \"/etc/tmpfiles.d\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 0}", "", "TASK [ceph-osd : disable transparent hugepage] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:33", "Monday 25 June 2018 06:07:22 -0400 (0:00:00.466) 0:02:43.658 *********** ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"e000059a4cfd8ce350b13f14305a46eaf99849ba\", \"dest\": \"/etc/tmpfiles.d/ceph_transparent_hugepage.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"21ac872f3aa1fb44b01d4f7ab00a35fc\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 158, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921242.31-245170607652251/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-osd : get default vm.min_free_kbytes] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:45", "Monday 25 June 2018 06:07:24 
-0400 (0:00:02.289) 0:02:45.948 *********** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"sysctl\", \"-b\", \"vm.min_free_kbytes\"], \"delta\": \"0:00:00.003830\", \"end\": \"2018-06-25 10:07:25.128957\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:07:25.125127\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"67584\", \"stdout_lines\": [\"67584\"]}", "", "TASK [ceph-osd : set_fact vm_min_free_kbytes] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:52", "Monday 25 June 2018 06:07:25 -0400 (0:00:00.468) 0:02:46.416 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"vm_min_free_kbytes\": \"67584\"}, \"changed\": false}", "", "TASK [ceph-osd : apply operating system tuning] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:56", "Monday 25 June 2018 06:07:25 -0400 (0:00:00.061) 0:02:46.478 *********** ", "changed: [ceph-0] => (item={u'enable': u\"(osd_objectstore == 'bluestore')\", u'name': u'fs.aio-max-nr', u'value': u'1048576'}) => {\"changed\": true, \"item\": {\"enable\": \"(osd_objectstore == 'bluestore')\", \"name\": \"fs.aio-max-nr\", \"value\": \"1048576\"}}", "changed: [ceph-0] => (item={u'name': u'fs.file-max', u'value': 26234859}) => {\"changed\": true, \"item\": {\"name\": \"fs.file-max\", \"value\": 26234859}}", "changed: [ceph-0] => (item={u'name': u'vm.zone_reclaim_mode', u'value': 0}) => {\"changed\": true, \"item\": {\"name\": \"vm.zone_reclaim_mode\", \"value\": 0}}", "changed: [ceph-0] => (item={u'name': u'vm.swappiness', u'value': 10}) => {\"changed\": true, \"item\": {\"name\": \"vm.swappiness\", \"value\": 10}}", "changed: [ceph-0] => (item={u'name': u'vm.min_free_kbytes', u'value': u'67584'}) => {\"changed\": true, \"item\": {\"name\": \"vm.min_free_kbytes\", \"value\": \"67584\"}}", "", "TASK [ceph-osd : install dependencies] *****************************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:10", "Monday 25 June 2018 06:07:27 -0400 (0:00:02.411) 0:02:48.890 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include common.yml] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:18", "Monday 25 June 2018 06:07:27 -0400 (0:00:00.044) 0:02:48.934 *********** ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml for ceph-0", "", "TASK [ceph-osd : create bootstrap-osd and osd directories] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:2", "Monday 25 June 2018 06:07:27 -0400 (0:00:00.068) 0:02:49.003 *********** ", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "ok: [ceph-0] => (item=/var/lib/ceph/osd/) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-osd : copy ceph key(s) if needed] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:15", "Monday 25 June 2018 06:07:28 -0400 (0:00:00.932) 0:02:49.936 *********** ", "changed: [ceph-0] => (item={u'name': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"502b9fd25b9d73522bc5c0029ec362bd3ef148be\", \"dest\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": 
true, \"name\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\"}, \"md5sum\": \"2f594fd27d9a2938d207fd0e4dcd1fdb\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 113, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921248.59-234204469734268/source\", \"state\": \"file\", \"uid\": 167}", "skipping: [ceph-0] => (item={u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, \"item\": {\"copy_key\": false, \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:2", "Monday 25 June 2018 06:07:30 -0400 (0:00:02.391) 0:02:52.327 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options 'ceph_disk_cli_options'] *******", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:11", "Monday 25 June 2018 06:07:30 -0400 (0:00:00.038) 0:02:52.366 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph'] **************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:20", "Monday 25 June 2018 06:07:31 -0400 (0:00:00.051) 0:02:52.417 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore --dmcrypt'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:29", "Monday 25 June 2018 06:07:31 -0400 (0:00:00.050) 0:02:52.467 *********** ", "skipping: [ceph-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --filestore --dmcrypt'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:38", "Monday 25 June 2018 06:07:31 -0400 (0:00:00.045) 0:02:52.512 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --dmcrypt'] ****", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:47", "Monday 25 June 2018 06:07:31 -0400 (0:00:00.040) 0:02:52.553 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact docker_env_args '-e KV_TYPE=etcd -e KV_IP=127.0.0.1 -e KV_PORT=2379'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:56", "Monday 25 June 2018 06:07:31 -0400 (0:00:00.046) 0:02:52.600 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:62", "Monday 25 June 2018 06:07:31 -0400 (0:00:00.036) 0:02:52.636 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"docker_env_args\": \"-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0\"}, \"changed\": false}", "", "TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=1'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:70", "Monday 25 June 2018 06:07:31 -0400 (0:00:00.067) 0:02:52.703 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact 
docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:78", "Monday 25 June 2018 06:07:31 -0400 (0:00:00.042) 0:02:52.745 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=1'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:86", "Monday 25 June 2018 06:07:31 -0400 (0:00:00.037) 0:02:52.782 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact devices generate device list when osd_auto_discovery] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:2", "Monday 25 June 2018 06:07:31 -0400 (0:00:00.044) 0:02:52.827 *********** ", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sectors': u'41943040', u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. 
Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {u'vda1': {u'sectorsize': 512, u'uuid': u'2018-06-25-05-49-20-00', u'links': {u'masters': [], u'labels': [u'config-2'], u'ids': [], u'uuids': [u'2018-06-25-05-49-20-00']}, u'sectors': u'2048', u'start': u'2048', u'holders': [], u'size': u'1.00 MB'}, u'vda2': {u'sectorsize': 512, u'uuid': u'fca00eb7-6dba-4ea0-b1e5-202b819f2b85', u'links': {u'masters': [], u'labels': [u'img-rootfs'], u'ids': [], u'uuids': [u'fca00eb7-6dba-4ea0-b1e5-202b819f2b85']}, u'sectors': u'41938911', u'start': u'4096', u'holders': [], u'size': u'20.00 GB'}}, u'holders': [], u'size': u'20.00 GB'}, 'key': u'vda'}) => {\"changed\": false, \"item\": {\"key\": \"vda\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {\"vda1\": {\"holders\": [], \"links\": {\"ids\": [], \"labels\": [\"config-2\"], \"masters\": [], \"uuids\": [\"2018-06-25-05-49-20-00\"]}, \"sectors\": \"2048\", \"sectorsize\": 512, \"size\": \"1.00 MB\", \"start\": \"2048\", \"uuid\": \"2018-06-25-05-49-20-00\"}, \"vda2\": {\"holders\": [], \"links\": {\"ids\": [], \"labels\": [\"img-rootfs\"], \"masters\": [], \"uuids\": [\"fca00eb7-6dba-4ea0-b1e5-202b819f2b85\"]}, \"sectors\": \"41938911\", \"sectorsize\": 512, \"size\": \"20.00 GB\", \"start\": \"4096\", \"uuid\": \"fca00eb7-6dba-4ea0-b1e5-202b819f2b85\"}}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"41943040\", \"sectorsize\": \"512\", \"size\": \"20.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', 
u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sectors': u'83886080', u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'40.00 GB'}, 'key': u'vdb'}) => {\"changed\": false, \"item\": {\"key\": \"vdb\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"83886080\", \"sectorsize\": \"512\", \"size\": \"40.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : resolve dedicated device link(s)] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:15", "Monday 25 June 2018 06:07:31 -0400 (0:00:00.050) 0:02:52.877 *********** ", "", "TASK [ceph-osd : set_fact build dedicated_devices from resolved symlinks] ******", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:24", "Monday 25 June 2018 06:07:31 -0400 (0:00:00.039) 0:02:52.917 *********** ", "", "TASK [ceph-osd : set_fact build final dedicated_devices list] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:32", "Monday 25 June 2018 06:07:31 -0400 (0:00:00.038) 0:02:52.955 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : read information about the devices] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:29", "Monday 25 June 
2018 06:07:31 -0400 (0:00:00.039) 0:02:52.995 *********** ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 40960.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}", "", "TASK [ceph-osd : check the partition status of the osd disks] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:2", "Monday 25 June 2018 06:07:32 -0400 (0:00:00.693) 0:02:53.688 *********** ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:00.007642\", \"end\": \"2018-06-25 10:07:33.004123\", \"failed_when_result\": false, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:32.996481\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : create gpt disk label] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:11", "Monday 25 June 2018 06:07:32 -0400 (0:00:00.602) 0:02:54.291 *********** ", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdb'], u'end': u'2018-06-25 10:07:33.004123', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdb', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'start': u'2018-06-25 10:07:32.996481', u'delta': u'0:00:00.007642', 'item': u'/dev/vdb', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u'', '_ansible_ignore_errors': None, u'failed': False}, 
u'/dev/vdb']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdb\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.013028\", \"end\": \"2018-06-25 10:07:33.606261\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:00.007642\", \"end\": \"2018-06-25 10:07:33.004123\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdb\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:32.996481\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-06-25 10:07:33.593233\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : include scenarios/collocated.yml] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:41", "Monday 25 June 2018 06:07:33 -0400 (0:00:00.607) 0:02:54.899 *********** ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml for ceph-0", "", "TASK [ceph-osd : prepare ceph containerized osd disk collocated] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:5", "Monday 25 June 2018 06:07:33 -0400 (0:00:00.082) 0:02:54.981 *********** ", "changed: [ceph-0] => (item=[{'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, u'script': u\"unit 'MiB' print\", '_ansible_item_result': True, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': 
None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 40960.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdb -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdb -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-6\", \"delta\": \"0:00:06.868230\", \"end\": \"2018-06-25 10:07:41.149753\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 40960.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-06-25 10:07:34.281523\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: 
create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: 
create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-06-25 10:07:34'\\n+common_functions.sh:13: log(): echo '2018-06-25 10:07:34 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 77cb590c-de4c-4507-b665-fd28566a15bc /dev/vdb\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdb\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:77cb590c-de4c-4507-b665-fd28566a15bc --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/77cb590c-de4c-4507-b665-fd28566a15bc\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/77cb590c-de4c-4507-b665-fd28566a15bc\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdb\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:0f2ca894-7390-4044-aedf-d8eeb9dcbdd0 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdb1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\\nmount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.etxf1G with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.etxf1G\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.etxf1G\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.etxf1G\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/ceph_fsid.23091.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/ceph_fsid.23091.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/fsid.23091.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/fsid.23091.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/magic.23091.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/magic.23091.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/journal_uuid.23091.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/journal_uuid.23091.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.etxf1G/journal -> /dev/disk/by-partuuid/77cb590c-de4c-4507-b665-fd28566a15bc\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/type.23091.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/type.23091.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.etxf1G\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.etxf1G\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\\nupdate_partition: Calling partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-06-25 10:07:34'\", \"+common_functions.sh:13: log(): echo '2018-06-25 10:07:34 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 77cb590c-de4c-4507-b665-fd28566a15bc /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:77cb590c-de4c-4507-b665-fd28566a15bc --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/77cb590c-de4c-4507-b665-fd28566a15bc\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/77cb590c-de4c-4507-b665-fd28566a15bc\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdb\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:0f2ca894-7390-4044-aedf-d8eeb9dcbdd0 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdb1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\", \"mount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.etxf1G with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.etxf1G\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.etxf1G\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.etxf1G\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/ceph_fsid.23091.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/ceph_fsid.23091.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/fsid.23091.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/fsid.23091.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/magic.23091.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/magic.23091.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/journal_uuid.23091.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/journal_uuid.23091.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.etxf1G/journal -> /dev/disk/by-partuuid/77cb590c-de4c-4507-b665-fd28566a15bc\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G/type.23091.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G/type.23091.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.etxf1G\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.etxf1G\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.etxf1G\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.etxf1G\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdb2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-06-25 10:07:34 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-06-25 10:07:34 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-06-25 10:07:34 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-06-25 10:07:34 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdb\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mds/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.9L24Q2a7qz' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from 
root:root to ceph:ceph\\n2018-06-25 10:07:34 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdb1 isize=2048 agcount=4, agsize=2588607 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=10354427, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=5055, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdb2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdb1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-06-25 10:07:34 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-06-25 10:07:34 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-06-25 10:07:34 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-06-25 10:07:34 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdb\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"changed ownership of 
'/var/lib/ceph/mds/ceph-ceph-0' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.9L24Q2a7qz' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from root:root to ceph:ceph\", \"2018-06-25 10:07:34 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdb1 isize=2048 agcount=4, agsize=2588607 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=10354427, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=5055, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdb2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdb1' from root:disk to ceph:ceph\"]}", "", "TASK [ceph-osd : automatic prepare ceph containerized osd disk collocated] *****", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:30", "Monday 25 June 2018 06:07:41 -0400 
(0:00:07.479) 0:03:02.461 *********** ", "skipping: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"item\": \"/dev/vdb\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : manually prepare ceph \"filestore\" non-containerized osd disk(s) with collocated osd data and journal] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:53", "Monday 25 June 2018 06:07:41 -0400 (0:00:00.053) 0:03:02.514 *********** ", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, u'script': u\"unit 'MiB' print\", '_ansible_item_result': True, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 40960.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 40960.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : 
include scenarios/non-collocated.yml] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:48", "Monday 25 June 2018 06:07:41 -0400 (0:00:00.055) 0:03:02.569 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include scenarios/lvm.yml] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:56", "Monday 25 June 2018 06:07:41 -0400 (0:00:00.043) 0:03:02.613 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include activate_osds.yml] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:64", "Monday 25 June 2018 06:07:41 -0400 (0:00:00.042) 0:03:02.656 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include start_osds.yml] ***************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:72", "Monday 25 June 2018 06:07:41 -0400 (0:00:00.043) 0:03:02.699 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include docker/main.yml] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:80", "Monday 25 June 2018 06:07:41 -0400 (0:00:00.040) 0:03:02.740 *********** ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml for ceph-0", "", "TASK [ceph-osd : include start_docker_osd.yml] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml:2", "Monday 25 June 2018 06:07:41 -0400 (0:00:00.078) 0:03:02.818 *********** ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml for ceph-0", "", "TASK [ceph-osd : umount 
ceph disk (if on openstack)] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:4", "Monday 25 June 2018 06:07:41 -0400 (0:00:00.063) 0:03:02.882 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : test if the container image has the disk_list function] *******", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:13", "Monday 25 June 2018 06:07:41 -0400 (0:00:00.039) 0:03:02.921 *********** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint=stat\", \"192.168.24.1:8787/rhceph:3-6\", \"disk_list.sh\"], \"delta\": \"0:00:00.430992\", \"end\": \"2018-06-25 10:07:42.569168\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:07:42.138176\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: 'disk_list.sh'\\n Size: 3726 \\tBlocks: 8 IO Block: 4096 regular file\\nDevice: 2bh/43d\\tInode: 33605760 Links: 1\\nAccess: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\\nAccess: 2018-04-18 13:02:03.000000000 +0000\\nModify: 2018-04-18 13:02:03.000000000 +0000\\nChange: 2018-06-25 10:07:00.699558531 +0000\\n Birth: -\", \"stdout_lines\": [\" File: 'disk_list.sh'\", \" Size: 3726 \\tBlocks: 8 IO Block: 4096 regular file\", \"Device: 2bh/43d\\tInode: 33605760 Links: 1\", \"Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\", \"Access: 2018-04-18 13:02:03.000000000 +0000\", \"Modify: 2018-04-18 13:02:03.000000000 +0000\", \"Change: 2018-06-25 10:07:00.699558531 +0000\", \" Birth: -\"]}", "", "TASK [ceph-osd : generate ceph osd docker run script] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:19", "Monday 25 June 2018 06:07:42 -0400 (0:00:00.937) 0:03:03.859 *********** ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": 
\"6e2ae7f97fe861dbe9824133e6c912df4b7c8959\", \"dest\": \"/usr/share/ceph-osd-run.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"97ef03a63aca5a84f85a7a061ad42a61\", \"mode\": \"0744\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:usr_t:s0\", \"size\": 1000, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921262.51-211790013762956/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-osd : generate systemd unit file] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:28", "Monday 25 June 2018 06:07:44 -0400 (0:00:02.274) 0:03:06.133 *********** ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"b7abfb86a4af8d6e54d349965cae96bf9b995c49\", \"dest\": \"/etc/systemd/system/ceph-osd@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"8a53f95e6590750e7c4807589dd5864c\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 496, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921264.78-219742478778726/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-osd : systemd start osd container] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:39", "Monday 25 June 2018 06:07:47 -0400 (0:00:02.434) 0:03:08.568 *********** ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"enabled\": true, \"item\": \"/dev/vdb\", \"name\": \"ceph-osd@vdb\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dosd.slice basic.target docker.service systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": 
\"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdb.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", 
\"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"14904\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"14904\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdb.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", 
\"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "", "TASK [ceph-osd : set_fact openstack_keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:87", "Monday 25 June 2018 06:07:47 -0400 (0:00:00.774) 0:03:09.343 *********** ", "ok: [ceph-0] => (item={u'mon_cap': u'allow r', u'name': u'client.openstack', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==', u'osd_cap': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'}) => {\"ansible_facts\": {\"openstack_keys_tmp\": [{\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}]}, \"changed\": false, \"item\": {\"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow r\", \"name\": \"client.openstack\", \"osd_cap\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}}", "ok: [ceph-0] => (item={u'mon_cap': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth 
caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"', u'mds_cap': u'allow *', u'name': u'client.manila', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==', u'osd_cap': u'allow rw'}) => {\"ansible_facts\": {\"openstack_keys_tmp\": [{\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}]}, \"changed\": false, \"item\": {\"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mds_cap\": \"allow *\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"name\": \"client.manila\", \"osd_cap\": \"allow rw\"}}", "ok: [ceph-0] => (item={u'mon_cap': u'allow rw', u'name': u'client.radosgw', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==', u'osd_cap': u'allow rwx'}) => {\"ansible_facts\": {\"openstack_keys_tmp\": [{\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": 
\"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}]}, \"changed\": false, \"item\": {\"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow rw\", \"name\": \"client.radosgw\", \"osd_cap\": \"allow rwx\"}}", "", "TASK [ceph-osd : set_fact keys - override keys_tmp with keys] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:95", "Monday 25 June 2018 06:07:48 -0400 (0:00:00.115) 0:03:09.458 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"openstack_keys\": [{\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", 
\"osd\": \"allow rwx\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}]}, \"changed\": false}", "", "TASK [ceph-osd : wait for all osd to be up] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:2", "Monday 25 June 2018 06:07:48 -0400 (0:00:00.106) 0:03:09.565 *********** ", "changed: [ceph-0 -> 192.168.24.14] => {\"attempts\": 1, \"changed\": true, \"cmd\": \"test \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_osds\\\"])')\\\" = \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_up_osds\\\"])')\\\"\", \"delta\": \"0:00:00.685557\", \"end\": \"2018-06-25 10:07:49.589654\", \"rc\": 0, \"start\": \"2018-06-25 10:07:48.904097\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : list existing pool(s)] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:12", "Monday 25 June 2018 06:07:49 -0400 (0:00:01.356) 0:03:10.922 *********** ", "changed: [ceph-0 -> 192.168.24.14] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u''}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"images\", \"size\"], \"delta\": \"0:00:00.353822\", \"end\": \"2018-06-25 10:07:50.587628\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:50.233806\", \"stderr\": \"Error ENOENT: unrecognized pool 'images'\", \"stderr_lines\": [\"Error 
ENOENT: unrecognized pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.14] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u''}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"metrics\", \"size\"], \"delta\": \"0:00:00.334785\", \"end\": \"2018-06-25 10:07:51.419709\", \"failed_when_result\": false, \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:51.084924\", \"stderr\": \"Error ENOENT: unrecognized pool 'metrics'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.14] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u''}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"backups\", \"size\"], \"delta\": \"0:00:00.323941\", \"end\": \"2018-06-25 10:07:52.236761\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:51.912820\", \"stderr\": \"Error ENOENT: unrecognized pool 'backups'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.14] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u''}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"vms\", \"size\"], \"delta\": \"0:00:00.337900\", \"end\": \"2018-06-25 10:07:53.066076\", \"failed_when_result\": 
false, \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:52.728176\", \"stderr\": \"Error ENOENT: unrecognized pool 'vms'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.14] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u''}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"volumes\", \"size\"], \"delta\": \"0:00:00.350538\", \"end\": \"2018-06-25 10:07:53.923307\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:53.572769\", \"stderr\": \"Error ENOENT: unrecognized pool 'volumes'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : create openstack pool(s)] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:21", "Monday 25 June 2018 06:07:53 -0400 (0:00:04.339) 0:03:15.261 *********** ", "ok: [ceph-0 -> 192.168.24.14] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'images'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'images', u'size'], u'end': u'2018-06-25 10:07:50.587628', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.14'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': 
False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get images size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-25 10:07:50.233806', u'delta': u'0:00:00.353822', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u\"Error ENOENT: unrecognized pool 'images'\", '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"images\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:01.009998\", \"end\": \"2018-06-25 10:07:55.587323\", \"item\": [{\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.14\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"images\", \"size\"], \"delta\": \"0:00:00.353822\", \"end\": \"2018-06-25 10:07:50.587628\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get images size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:50.233806\", \"stderr\": \"Error ENOENT: unrecognized pool 'images'\", \"stderr_lines\": [\"Error 
ENOENT: unrecognized pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-06-25 10:07:54.577325\", \"stderr\": \"pool 'images' created\", \"stderr_lines\": [\"pool 'images' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.14] => (item=[{u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'metrics'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'metrics', u'size'], u'end': u'2018-06-25 10:07:51.419709', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.14'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-25 10:07:51.084924', u'delta': u'0:00:00.334785', 'item': {u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u\"Error ENOENT: unrecognized pool 'metrics'\", '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"metrics\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:00.953067\", \"end\": \"2018-06-25 10:07:57.065992\", \"item\": [{\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": 
\"192.168.24.14\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"metrics\", \"size\"], \"delta\": \"0:00:00.334785\", \"end\": \"2018-06-25 10:07:51.419709\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:51.084924\", \"stderr\": \"Error ENOENT: unrecognized pool 'metrics'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-06-25 10:07:56.112925\", \"stderr\": \"pool 'metrics' created\", \"stderr_lines\": [\"pool 'metrics' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.14] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'backups'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'backups', u'size'], u'end': u'2018-06-25 10:07:52.236761', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.14'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get 
backups size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-25 10:07:51.912820', u'delta': u'0:00:00.323941', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u\"Error ENOENT: unrecognized pool 'backups'\", '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"backups\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:00.905811\", \"end\": \"2018-06-25 10:07:58.478913\", \"item\": [{\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.14\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"backups\", \"size\"], \"delta\": \"0:00:00.323941\", \"end\": \"2018-06-25 10:07:52.236761\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get backups size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:51.912820\", \"stderr\": \"Error ENOENT: unrecognized pool 'backups'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 
0, \"start\": \"2018-06-25 10:07:57.573102\", \"stderr\": \"pool 'backups' created\", \"stderr_lines\": [\"pool 'backups' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.14] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'vms'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'vms', u'size'], u'end': u'2018-06-25 10:07:53.066076', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.14'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get vms size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-25 10:07:52.728176', u'delta': u'0:00:00.337900', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u\"Error ENOENT: unrecognized pool 'vms'\", '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"vms\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:00.956912\", \"end\": \"2018-06-25 10:07:59.961855\", \"item\": [{\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.14\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": 
[\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"vms\", \"size\"], \"delta\": \"0:00:00.337900\", \"end\": \"2018-06-25 10:07:53.066076\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get vms size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:52.728176\", \"stderr\": \"Error ENOENT: unrecognized pool 'vms'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-06-25 10:07:59.004943\", \"stderr\": \"pool 'vms' created\", \"stderr_lines\": [\"pool 'vms' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.14] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'volumes'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'volumes', u'size'], u'end': u'2018-06-25 10:07:53.923307', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.14'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get volumes size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-25 10:07:53.572769', u'delta': u'0:00:00.350538', 'item': {u'application': u'rbd', 
u'pg_num': 32, u'name': u'volumes', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u\"Error ENOENT: unrecognized pool 'volumes'\", '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"volumes\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:01.000109\", \"end\": \"2018-06-25 10:08:01.469442\", \"item\": [{\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.14\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"volumes\", \"size\"], \"delta\": \"0:00:00.350538\", \"end\": \"2018-06-25 10:07:53.923307\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get volumes size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-25 10:07:53.572769\", \"stderr\": \"Error ENOENT: unrecognized pool 'volumes'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-06-25 10:08:00.469333\", \"stderr\": \"pool 'volumes' created\", \"stderr_lines\": [\"pool 'volumes' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : 
assign application to pool(s)] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:41", "Monday 25 June 2018 06:08:01 -0400 (0:00:07.545) 0:03:22.807 *********** ", "ok: [ceph-0 -> 192.168.24.14] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u''}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"images\", \"rbd\"], \"delta\": \"0:00:01.340092\", \"end\": \"2018-06-25 10:08:03.437552\", \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"\"}, \"rc\": 0, \"start\": \"2018-06-25 10:08:02.097460\", \"stderr\": \"enabled application 'rbd' on pool 'images'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.14] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u''}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"metrics\", \"openstack_gnocchi\"], \"delta\": \"0:00:00.509473\", \"end\": \"2018-06-25 10:08:04.457169\", \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"\"}, \"rc\": 0, \"start\": \"2018-06-25 10:08:03.947696\", \"stderr\": \"enabled application 'openstack_gnocchi' on pool 'metrics'\", \"stderr_lines\": [\"enabled application 'openstack_gnocchi' on pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.14] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u''}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", 
\"backups\", \"rbd\"], \"delta\": \"0:00:00.496703\", \"end\": \"2018-06-25 10:08:05.473265\", \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"\"}, \"rc\": 0, \"start\": \"2018-06-25 10:08:04.976562\", \"stderr\": \"enabled application 'rbd' on pool 'backups'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.14] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u''}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"vms\", \"rbd\"], \"delta\": \"0:00:00.510497\", \"end\": \"2018-06-25 10:08:06.491356\", \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"\"}, \"rc\": 0, \"start\": \"2018-06-25 10:08:05.980859\", \"stderr\": \"enabled application 'rbd' on pool 'vms'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.14] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u''}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"volumes\", \"rbd\"], \"delta\": \"0:00:00.488328\", \"end\": \"2018-06-25 10:08:07.469816\", \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"\"}, \"rc\": 0, \"start\": \"2018-06-25 10:08:06.981488\", \"stderr\": \"enabled application 'rbd' on pool 'volumes'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : create openstack cephx key(s)] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:50", "Monday 25 June 
2018 06:08:07 -0400 (0:00:05.985) 0:03:28.792 *********** ", "changed: [ceph-0 -> 192.168.24.14] => (item={'caps': {'mds': u'', 'osd': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics', 'mon': u'allow r', 'mgr': u'allow *'}, 'mode': u'0600', 'key': u'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==', 'name': u'client.openstack'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.openstack.keyring\"], \"delta\": \"0:00:00.814098\", \"end\": \"2018-06-25 10:08:09.116122\", \"item\": {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"rc\": 0, \"start\": \"2018-06-25 10:08:08.302024\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.14] => (item={'caps': {'mds': u'allow *', 'osd': u'allow rw', 'mon': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"', 'mgr': u'allow *'}, 'name': u'client.manila', 'key': u'AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==', 'mode': u'0600'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.manila.keyring\"], \"delta\": \"0:00:00.803924\", \"end\": \"2018-06-25 10:08:10.438462\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth 
del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"rc\": 0, \"start\": \"2018-06-25 10:08:09.634538\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.14] => (item={'caps': {'mds': u'', 'osd': u'allow rwx', 'mon': u'allow rw', 'mgr': u'allow *'}, 'mode': u'0600', 'key': u'AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==', 'name': u'client.radosgw'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.radosgw.keyring\"], \"delta\": \"0:00:00.862252\", \"end\": \"2018-06-25 10:08:11.806074\", \"item\": {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"rc\": 0, \"start\": \"2018-06-25 10:08:10.943822\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : fetch openstack cephx key(s)] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:63", "Monday 25 June 2018 06:08:11 -0400 (0:00:04.338) 0:03:33.130 *********** ", "changed: [ceph-0 -> 192.168.24.14] => (item={'caps': {'mds': u'', 'osd': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics', 'mon': u'allow r', 'mgr': u'allow *'}, 'mode': u'0600', 'key': u'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==', 'name': u'client.openstack'}) => {\"changed\": true, \"checksum\": 
\"56011607b6d88d1e1f856a2666ca00634ee8af81\", \"dest\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/etc/ceph/ceph.client.openstack.keyring\", \"item\": {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"md5sum\": \"6701009b87ea7660b3369abb8fdc0536\", \"remote_checksum\": \"56011607b6d88d1e1f856a2666ca00634ee8af81\", \"remote_md5sum\": null}", "changed: [ceph-0 -> 192.168.24.14] => (item={'caps': {'mds': u'allow *', 'osd': u'allow rw', 'mon': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"', 'mgr': u'allow *'}, 'name': u'client.manila', 'key': u'AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==', 'mode': u'0600'}) => {\"changed\": true, \"checksum\": \"c017bc60396016c3f00762471b81bf9c6cd4b443\", \"dest\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/etc/ceph/ceph.client.manila.keyring\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"md5sum\": \"d2b0ce76144746e6b7bff711e577e4ac\", \"remote_checksum\": \"c017bc60396016c3f00762471b81bf9c6cd4b443\", \"remote_md5sum\": null}", "changed: [ceph-0 -> 192.168.24.14] => (item={'caps': {'mds': u'', 'osd': u'allow rwx', 'mon': u'allow rw', 'mgr': 
u'allow *'}, 'mode': u'0600', 'key': u'AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==', 'name': u'client.radosgw'}) => {\"changed\": true, \"checksum\": \"e69e019107a5ba0f730c2d85403f54a4f0dd5e61\", \"dest\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/78ace352-763a-11e8-9c1d-525400166144/etc/ceph/ceph.client.radosgw.keyring\", \"item\": {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"md5sum\": \"a5f4e837b2107c38fbb797ad153986ba\", \"remote_checksum\": \"e69e019107a5ba0f730c2d85403f54a4f0dd5e61\", \"remote_md5sum\": null}", "", "TASK [ceph-osd : copy to other mons the openstack cephx key(s)] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:71", "Monday 25 June 2018 06:08:13 -0400 (0:00:01.533) 0:03:34.664 *********** ", "changed: [ceph-0 -> 192.168.24.14] => (item=[u'controller-0', {'name': u'client.openstack', 'mode': u'0600', 'key': u'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==', 'caps': {'mds': u'', 'osd': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics', 'mgr': u'allow *', 'mon': u'allow r'}}]) => {\"changed\": true, \"checksum\": \"56011607b6d88d1e1f856a2666ca00634ee8af81\", \"dest\": \"/etc/ceph/ceph.client.openstack.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": 
\"/etc/ceph/ceph.client.openstack.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 299, \"state\": \"file\", \"uid\": 167}", "changed: [ceph-0 -> 192.168.24.14] => (item=[u'controller-0', {'mode': u'0600', 'name': u'client.manila', 'key': u'AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==', 'caps': {'mds': u'allow *', 'osd': u'allow rw', 'mgr': u'allow *', 'mon': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"'}}]) => {\"changed\": true, \"checksum\": \"c017bc60396016c3f00762471b81bf9c6cd4b443\", \"dest\": \"/etc/ceph/ceph.client.manila.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.manila.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 276, \"state\": \"file\", \"uid\": 167}", "changed: [ceph-0 -> 192.168.24.14] => (item=[u'controller-0', {'name': u'client.radosgw', 'mode': u'0600', 'key': u'AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==', 'caps': {'mds': u'', 'osd': u'allow rwx', 'mgr': u'allow *', 'mon': u'allow rw'}}]) => {\"changed\": true, \"checksum\": \"e69e019107a5ba0f730c2d85403f54a4f0dd5e61\", \"dest\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}], \"mode\": \"0600\", \"owner\": 
\"167\", \"path\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 149, \"state\": \"file\", \"uid\": 167}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******", "Monday 25 June 2018 06:08:19 -0400 (0:00:05.914) 0:03:40.579 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************", "Monday 25 June 2018 06:08:19 -0400 (0:00:00.069) 0:03:40.648 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***", "Monday 25 June 2018 06:08:19 -0400 (0:00:00.043) 0:03:40.692 *********** ", "skipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******", "Monday 25 June 2018 06:08:19 -0400 (0:00:00.076) 0:03:40.769 *********** ", "skipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********", "Monday 25 June 2018 06:08:19 -0400 (0:00:00.075) 0:03:40.844 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******", "Monday 25 June 2018 06:08:19 -0400 (0:00:00.062) 0:03:40.906 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************", "Monday 25 June 2018 06:08:19 -0400 (0:00:00.060) 0:03:40.967 *********** ", "changed: 
[ceph-0] => {\"changed\": true, \"checksum\": \"9a770971b362c519fc75c5228fc22dd8d4cc68aa\", \"dest\": \"/tmp/restart_osd_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"c42d82e9b9c002f16b40c524607c38ea\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_home_t:s0\", \"size\": 3060, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921299.64-250500451455531/source\", \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***", "Monday 25 June 2018 06:08:21 -0400 (0:00:02.288) 0:03:43.256 *********** ", "skipping: [ceph-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******", "Monday 25 June 2018 06:08:21 -0400 (0:00:00.079) 0:03:43.335 *********** ", "skipping: [ceph-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.084) 0:03:43.419 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.064) 0:03:43.484 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.064) 0:03:43.549 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.039) 0:03:43.588 
*********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.050) 0:03:43.639 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.046) 0:03:43.685 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.062) 0:03:43.748 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.064) 0:03:43.812 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.040) 0:03:43.853 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.047) 0:03:43.900 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.045) 0:03:43.945 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set 
_rbdmirror_handler_called before restart] ***", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.059) 0:03:44.005 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.063) 0:03:44.069 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.040) 0:03:44.109 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.046) 0:03:44.156 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.047) 0:03:44.204 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.059) 0:03:44.263 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.060) 0:03:44.324 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Monday 25 June 2018 06:08:22 -0400 (0:00:00.040) 0:03:44.365 *********** ", "skipping: [ceph-0] => 
(item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.071) 0:03:44.437 *********** ", "skipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.076) 0:03:44.513 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [set ceph osd install 'Complete'] *****************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:156", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.083) 0:03:44.597 *********** ", "ok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"end\": \"20180625060823Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "PLAY [mdss] ********************************************************************", "skipping: no hosts matched", "", "PLAY [rgws] ********************************************************************", "skipping: no hosts matched", "", "PLAY [nfss] ********************************************************************", "skipping: no hosts matched", "", "PLAY [rbdmirrors] **************************************************************", "skipping: no hosts matched", "", "PLAY [restapis] ****************************************************************", "skipping: no hosts matched", "", "PLAY [clients] *****************************************************************", "", "TASK [set ceph client install 'In Progress'] ***********************************", "task path: 
/usr/share/ceph-ansible/site-docker.yml.sample:307", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.147) 0:03:44.744 *********** ", "ok: [compute-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_client\": {\"start\": \"20180625060823Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.074) 0:03:44.819 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.044) 0:03:44.863 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.044) 0:03:44.908 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.043) 0:03:44.951 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", 
"Monday 25 June 2018 06:08:23 -0400 (0:00:00.048) 0:03:45.000 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.042) 0:03:45.043 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.041) 0:03:45.085 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.044) 0:03:45.130 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.040) 0:03:45.171 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.038) 0:03:45.209 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", 
"TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.041) 0:03:45.250 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.041) 0:03:45.292 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.043) 0:03:45.335 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Monday 25 June 2018 06:08:23 -0400 (0:00:00.048) 0:03:45.384 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Monday 25 June 2018 06:08:24 -0400 (0:00:00.044) 0:03:45.429 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Monday 25 June 2018 06:08:24 -0400 (0:00:00.045) 0:03:45.475 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Monday 25 June 2018 06:08:24 -0400 (0:00:00.045) 0:03:45.520 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Monday 25 June 2018 06:08:24 -0400 (0:00:00.046) 0:03:45.567 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Monday 25 June 2018 06:08:24 -0400 (0:00:00.045) 0:03:45.612 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Monday 25 June 2018 06:08:24 -0400 (0:00:00.045) 0:03:45.658 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Monday 25 June 2018 06:08:24 -0400 (0:00:00.045) 0:03:45.703 *********** ", "skipping: 
[compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Monday 25 June 2018 06:08:24 -0400 (0:00:00.046) 0:03:45.749 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Monday 25 June 2018 06:08:24 -0400 (0:00:00.045) 0:03:45.795 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Monday 25 June 2018 06:08:24 -0400 (0:00:00.186) 0:03:45.982 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Monday 25 June 2018 06:08:24 -0400 (0:00:00.042) 0:03:46.024 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Monday 25 June 2018 06:08:24 -0400 (0:00:00.044) 0:03:46.069 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is 
in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Monday 25 June 2018 06:08:24 -0400 (0:00:00.040) 0:03:46.110 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Monday 25 June 2018 06:08:24 -0400 (0:00:00.043) 0:03:46.153 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Monday 25 June 2018 06:08:24 -0400 (0:00:00.040) 0:03:46.194 *********** ", "ok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Monday 25 June 2018 06:08:25 -0400 (0:00:00.526) 0:03:46.721 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Monday 25 June 2018 06:08:25 -0400 (0:00:00.070) 0:03:46.791 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"monitor_name\": \"compute-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Monday 25 June 2018 06:08:25 -0400 (0:00:00.072) 0:03:46.863 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", 
"", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Monday 25 June 2018 06:08:25 -0400 (0:00:00.071) 0:03:46.935 *********** ", "ok: [compute-0 -> 192.168.24.14] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Monday 25 June 2018 06:08:25 -0400 (0:00:00.133) 0:03:47.068 *********** ", "ok: [compute-0 -> 192.168.24.14] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.366285\", \"end\": \"2018-06-25 10:08:26.669548\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:08:26.303263\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"78ace352-763a-11e8-9c1d-525400166144\", \"stdout_lines\": [\"78ace352-763a-11e8-9c1d-525400166144\"]}", "", "TASK [ceph-defaults : check if /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Monday 25 June 2018 06:08:26 -0400 (0:00:00.897) 0:03:47.966 *********** ", "ok: [compute-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Monday 25 June 2018 06:08:26 -0400 (0:00:00.178) 0:03:48.145 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Monday 25 June 2018 06:08:26 -0400 (0:00:00.046) 0:03:48.191 *********** ", "ok: [compute-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 80, \"state\": \"directory\", \"uid\": 988}", "", "TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Monday 25 June 2018 06:08:26 -0400 (0:00:00.178) 0:03:48.369 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"fsid\": \"78ace352-763a-11e8-9c1d-525400166144\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.072) 0:03:48.442 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.076) 0:03:48.519 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.048) 0:03:48.568 *********** ", "ok: [compute-0 -> localhost] => {\"changed\": false, \"cmd\": \"echo 78ace352-763a-11e8-9c1d-525400166144 | tee /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"rc\": 0, \"stdout\": 
\"skipped, since /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\", \"stdout_lines\": [\"skipped, since /var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\"]}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.180) 0:03:48.748 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.040) 0:03:48.788 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.036) 0:03:48.825 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"mds_name\": \"compute-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.066) 0:03:48.892 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.040) 0:03:48.932 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}", "", "TASK 
[ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.067) 0:03:49.000 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.068) 0:03:49.068 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.068) 0:03:49.137 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.042) 0:03:49.179 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.041) 0:03:49.221 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.046) 0:03:49.267 *********** ", "skipping: [compute-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.039) 0:03:49.307 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.040) 0:03:49.348 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Monday 25 June 2018 06:08:27 -0400 (0:00:00.041) 0:03:49.390 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Monday 25 June 2018 06:08:28 -0400 (0:00:00.041) 0:03:49.432 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Monday 25 June 2018 06:08:28 -0400 (0:00:00.073) 0:03:49.505 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", 
\"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Monday 25 June 2018 06:08:28 -0400 (0:00:00.068) 0:03:49.574 *********** ", "changed: [compute-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/tmp) 
=> {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": 
\"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Monday 25 June 2018 06:08:33 -0400 (0:00:05.487) 0:03:55.062 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Monday 25 June 2018 06:08:33 -0400 (0:00:00.046) 0:03:55.108 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Monday 25 June 2018 06:08:33 -0400 (0:00:00.044) 0:03:55.153 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Monday 25 June 2018 06:08:33 -0400 (0:00:00.043) 0:03:55.197 *********** ", "ok: [compute-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: 
[compute-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Monday 25 June 2018 06:08:34 -0400 (0:00:00.997) 0:03:56.194 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"monitor_name\": \"compute-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Monday 25 June 2018 06:08:34 -0400 (0:00:00.082) 0:03:56.277 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Monday 25 June 2018 06:08:34 -0400 (0:00:00.044) 0:03:56.322 *********** ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.026815\", \"end\": \"2018-06-25 10:08:35.578523\", \"rc\": 0, \"start\": \"2018-06-25 10:08:35.551708\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 94f4240/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 94f4240/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Monday 25 June 2018 06:08:35 -0400 (0:00:00.543) 0:03:56.866 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already 
running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Monday 25 June 2018 06:08:35 -0400 (0:00:00.069) 0:03:56.936 *********** ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-compute-0\"], \"delta\": \"0:00:00.027011\", \"end\": \"2018-06-25 10:08:36.182188\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:08:36.155177\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.539) 0:03:57.475 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.050) 0:03:57.525 *********** ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.059) 0:03:57.585 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.047) 0:03:57.633 *********** ", "skipping: [compute-0] => {\"changed\": 
false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.052) 0:03:57.685 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.049) 0:03:57.734 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.053) 0:03:57.787 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.043) 0:03:57.830 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.052) 0:03:57.883 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.048) 0:03:57.932 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.051) 0:03:57.983 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.049) 0:03:58.033 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.046) 0:03:58.080 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.046) 0:03:58.126 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.053) 0:03:58.180 *********** ", "skipping: [compute-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.048) 0:03:58.228 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.046) 0:03:58.275 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.044) 0:03:58.320 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Monday 25 June 2018 06:08:36 -0400 (0:00:00.046) 0:03:58.367 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Monday 25 June 2018 06:08:37 -0400 (0:00:00.046) 0:03:58.413 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", 
"Monday 25 June 2018 06:08:37 -0400 (0:00:00.053) 0:03:58.466 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Monday 25 June 2018 06:08:37 -0400 (0:00:00.045) 0:03:58.512 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Monday 25 June 2018 06:08:37 -0400 (0:00:00.044) 0:03:58.557 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Monday 25 June 2018 06:08:37 -0400 (0:00:00.042) 0:03:58.600 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Monday 25 June 2018 06:08:37 -0400 (0:00:00.043) 0:03:58.644 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Monday 25 June 2018 06:08:37 -0400 (0:00:00.043) 0:03:58.687 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting 
ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Monday 25 June 2018 06:08:37 -0400 (0:00:00.054) 0:03:58.741 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Monday 25 June 2018 06:08:37 -0400 (0:00:00.048) 0:03:58.790 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Monday 25 June 2018 06:08:37 -0400 (0:00:00.045) 0:03:58.835 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Monday 25 June 2018 06:08:37 -0400 (0:00:00.045) 0:03:58.881 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Monday 25 June 2018 06:08:37 -0400 (0:00:00.045) 0:03:58.926 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Monday 25 June 2018 06:08:37 -0400 (0:00:00.046) 0:03:58.973 *********** ", 
"skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Monday 25 June 2018 06:08:37 -0400 (0:00:00.051) 0:03:59.024 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Monday 25 June 2018 06:08:37 -0400 (0:00:00.042) 0:03:59.067 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Monday 25 June 2018 06:08:37 -0400 (0:00:00.043) 0:03:59.110 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Monday 25 June 2018 06:08:37 -0400 (0:00:00.045) 0:03:59.156 *********** ", "ok: [compute-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:16.612840\", \"end\": \"2018-06-25 10:08:55.118463\", \"rc\": 0, \"start\": \"2018-06-25 10:08:38.505623\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\\n3-6: Pulling from 192.168.24.1:8787/rhceph\\n9a32f102e677: Pulling fs layer\\nb8aa42cec17a: Pulling fs layer\\nf00cbf28d025: Pulling fs layer\\nb8aa42cec17a: Verifying Checksum\\nb8aa42cec17a: Download complete\\n9a32f102e677: Verifying Checksum\\n9a32f102e677: Download complete\\nf00cbf28d025: Verifying Checksum\\nf00cbf28d025: Download complete\\n9a32f102e677: Pull complete\\nb8aa42cec17a: Pull complete\\nf00cbf28d025: Pull complete\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"9a32f102e677: Pulling fs layer\", \"b8aa42cec17a: Pulling fs layer\", \"f00cbf28d025: Pulling fs layer\", \"b8aa42cec17a: Verifying Checksum\", \"b8aa42cec17a: Download complete\", \"9a32f102e677: Verifying Checksum\", \"9a32f102e677: Download complete\", \"f00cbf28d025: Verifying Checksum\", \"f00cbf28d025: Download complete\", \"9a32f102e677: Pull complete\", \"b8aa42cec17a: Pull complete\", \"f00cbf28d025: Pull complete\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Monday 25 June 2018 06:08:55 -0400 (0:00:17.263) 0:04:16.419 *********** ", "changed: [compute-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.032251\", \"end\": \"2018-06-25 10:08:55.698144\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-25 10:08:55.665893\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": 
\\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n 
\\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": 
{},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 
7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/05438dd2dbf4147ffea1c9670683cc73289a1beb5e335366ceae49581e9db966/diff:/var/lib/docker/overlay2/27250b41116c2d1bd1da05a6caaf5ee1d1219a89b4bc38979fd90c93eff8c02e/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/4c742a6925035635f27634794817fadc957e2b1327b3bac549537147a3e8a285/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/4c742a6925035635f27634794817fadc957e2b1327b3bac549537147a3e8a285/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/4c742a6925035635f27634794817fadc957e2b1327b3bac549537147a3e8a285/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" 
{\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": 
\\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu 
<evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" 
\\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/05438dd2dbf4147ffea1c9670683cc73289a1beb5e335366ceae49581e9db966/diff:/var/lib/docker/overlay2/27250b41116c2d1bd1da05a6caaf5ee1d1219a89b4bc38979fd90c93eff8c02e/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/4c742a6925035635f27634794817fadc957e2b1327b3bac549537147a3e8a285/merged\\\",\", \" \\\"UpperDir\\\": 
\\\"/var/lib/docker/overlay2/4c742a6925035635f27634794817fadc957e2b1327b3bac549537147a3e8a285/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/4c742a6925035635f27634794817fadc957e2b1327b3bac549537147a3e8a285/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Monday 25 June 2018 06:08:55 -0400 (0:00:00.586) 0:04:17.006 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Monday 25 June 2018 06:08:55 -0400 (0:00:00.083) 0:04:17.089 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Monday 25 June 2018 06:08:55 -0400 (0:00:00.047) 0:04:17.136 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Monday 25 June 2018 06:08:55 -0400 (0:00:00.046) 
0:04:17.183 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Monday 25 June 2018 06:08:55 -0400 (0:00:00.047) 0:04:17.230 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Monday 25 June 2018 06:08:55 -0400 (0:00:00.054) 0:04:17.285 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Monday 25 June 2018 06:08:55 -0400 (0:00:00.047) 0:04:17.332 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Monday 25 June 2018 06:08:55 -0400 (0:00:00.048) 0:04:17.381 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Monday 25 June 2018 06:08:56 -0400 (0:00:00.053) 0:04:17.435 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Monday 25 June 2018 06:08:56 -0400 (0:00:00.050) 0:04:17.486 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Monday 25 June 2018 06:08:56 -0400 (0:00:00.047) 0:04:17.533 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Monday 25 June 2018 06:08:56 -0400 (0:00:00.053) 0:04:17.586 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Monday 25 June 2018 06:08:56 -0400 (0:00:00.047) 0:04:17.634 *********** ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.573659\", \"end\": \"2018-06-25 10:08:57.422010\", \"rc\": 0, \"start\": \"2018-06-25 10:08:56.848351\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Monday 25 June 2018 06:08:57 -0400 (0:00:01.080) 0:04:18.715 
*********** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Monday 25 June 2018 06:08:57 -0400 (0:00:00.079) 0:04:18.794 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Monday 25 June 2018 06:08:57 -0400 (0:00:00.047) 0:04:18.842 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Monday 25 June 2018 06:08:57 -0400 (0:00:00.043) 0:04:18.885 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Monday 25 June 2018 06:08:57 -0400 (0:00:00.073) 0:04:18.959 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Monday 25 June 2018 06:08:57 -0400 (0:00:00.051) 0:04:19.010 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Monday 25 June 2018 06:08:57 -0400 (0:00:00.048) 0:04:19.058 *********** ", "changed: [compute-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task 
path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Monday 25 June 2018 06:09:00 -0400 (0:00:02.538) 0:04:21.597 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Monday 25 June 2018 06:09:00 -0400 (0:00:00.043) 0:04:21.640 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Monday 25 June 2018 06:09:00 -0400 (0:00:00.043) 0:04:21.683 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Monday 25 June 2018 06:09:00 -0400 (0:00:00.051) 0:04:21.735 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Monday 25 June 2018 06:09:00 -0400 (0:00:00.043) 0:04:21.779 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Monday 25 June 2018 06:09:00 -0400 (0:00:00.042) 0:04:21.821 *********** ", "changed: [compute-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": 
\"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Monday 25 June 2018 06:09:00 -0400 (0:00:00.533) 0:04:22.354 *********** ", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy mon restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy osd restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy mds restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy rgw restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for compute-0", "NOTIFIED HANDLER 
ceph-defaults : set _rgw_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for compute-0", "changed: [compute-0] => {\"changed\": true, \"checksum\": \"743848637000cc874025cc6ea8e3f15a09c4d9b7\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"70f96443c5883f06f4a1fd0921fced2c\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 978, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921341.01-55421071371481/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Monday 25 June 2018 06:09:04 -0400 (0:00:03.202) 0:04:25.557 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-client : copy ceph admin keyring when non containerized deployment] ***", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml:2", "Monday 25 June 2018 06:09:04 -0400 (0:00:00.044) 0:04:25.602 
*********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-client : set_fact keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:2", "Monday 25 June 2018 06:09:04 -0400 (0:00:00.040) 0:04:25.642 *********** ", "ok: [compute-0] => (item={u'mon_cap': u'allow r', u'name': u'client.openstack', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==', u'osd_cap': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'}) => {\"ansible_facts\": {\"keys_tmp\": [{\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}]}, \"changed\": false, \"item\": {\"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow r\", \"name\": \"client.openstack\", \"osd_cap\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}}", "ok: [compute-0] => (item={u'mon_cap': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"', u'mds_cap': u'allow *', u'name': u'client.manila', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==', u'osd_cap': u'allow rw'}) => {\"ansible_facts\": {\"keys_tmp\": [{\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow 
*'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}]}, \"changed\": false, \"item\": {\"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mds_cap\": \"allow *\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"name\": \"client.manila\", \"osd_cap\": \"allow rw\"}}", "ok: [compute-0] => (item={u'mon_cap': u'allow rw', u'name': u'client.radosgw', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==', u'osd_cap': u'allow rwx'}) => {\"ansible_facts\": {\"keys_tmp\": [{\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, 
\"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}]}, \"changed\": false, \"item\": {\"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow rw\", \"name\": \"client.radosgw\", \"osd_cap\": \"allow rwx\"}}", "", "TASK [ceph-client : set_fact keys - override keys_tmp with keys] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:9", "Monday 25 June 2018 06:09:04 -0400 (0:00:00.113) 0:04:25.756 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"keys\": [{\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}]}, \"changed\": false}", "", "TASK [ceph-client : run a dummy container (sleep 300) from where we can create pool(s)/key(s)] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:15", "Monday 25 June 2018 06:09:04 -0400 (0:00:00.074) 0:04:25.830 *********** ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"-d\", \"-v\", \"/etc/ceph:/etc/ceph:z\", \"--name\", \"ceph-create-keys\", \"--entrypoint=sleep\", \"192.168.24.1:8787/rhceph:3-6\", \"300\"], \"delta\": \"0:00:00.319928\", \"end\": \"2018-06-25 10:09:05.411868\", \"rc\": 0, \"start\": \"2018-06-25 10:09:05.091940\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"9de5dd85f987537801e8d6bf9e2202567ee5544ec635e840fe241fe5d8247841\", \"stdout_lines\": [\"9de5dd85f987537801e8d6bf9e2202567ee5544ec635e840fe241fe5d8247841\"]}", "", "TASK [ceph-client : set_fact delegated_node] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:30", "Monday 25 June 2018 06:09:05 -0400 (0:00:00.878) 0:04:26.709 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"delegated_node\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-client : set_fact condition_copy_admin_key] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:34", "Monday 25 June 2018 06:09:05 -0400 (0:00:00.075) 0:04:26.785 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"condition_copy_admin_key\": true}, \"changed\": false}", "", "TASK [ceph-client : set_fact docker_exec_cmd] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:38", "Monday 25 June 2018 06:09:05 -0400 (0:00:00.074) 0:04:26.859 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0 \"}, \"changed\": false}", "", "TASK [ceph-client : create cephx key(s)] ***************************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:44", "Monday 25 June 2018 
06:09:05 -0400 (0:00:00.137) 0:04:26.996 *********** ", "changed: [compute-0 -> 192.168.24.14] => (item={'caps': {'mds': u\"''\", 'osd': u\"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\", 'mon': u\"'allow r'\", 'mgr': u\"'allow *'\"}, 'mode': u'0600', 'key': u'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==', 'name': u'client.openstack'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph-authtool\", \"--create-keyring\", \"/etc/ceph/ceph.client.openstack.keyring\", \"--name\", \"client.openstack\", \"--add-key\", \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"--cap\", \"mds\", \"''\", \"--cap\", \"osd\", \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\", \"--cap\", \"mgr\", \"'allow *'\", \"--cap\", \"mon\", \"'allow r'\"], \"delta\": \"0:00:00.152629\", \"end\": \"2018-06-25 10:09:06.393746\", \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"rc\": 0, \"start\": \"2018-06-25 10:09:06.241117\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"creating /etc/ceph/ceph.client.openstack.keyring\\nadded entity client.openstack auth auth(auid = 18446744073709551615 key=AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw== with 0 caps)\", \"stdout_lines\": [\"creating /etc/ceph/ceph.client.openstack.keyring\", \"added entity client.openstack auth auth(auid = 18446744073709551615 key=AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw== with 0 caps)\"]}", "changed: [compute-0 -> 192.168.24.14] => (item={'caps': 
{'mds': u\"'allow *'\", 'osd': u\"'allow rw'\", 'mon': u'\\'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"\\'', 'mgr': u\"'allow *'\"}, 'name': u'client.manila', 'key': u'AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==', 'mode': u'0600'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph-authtool\", \"--create-keyring\", \"/etc/ceph/ceph.client.manila.keyring\", \"--name\", \"client.manila\", \"--add-key\", \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"--cap\", \"mds\", \"'allow *'\", \"--cap\", \"osd\", \"'allow rw'\", \"--cap\", \"mgr\", \"'allow *'\", \"--cap\", \"mon\", \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\"], \"delta\": \"0:00:00.153805\", \"end\": \"2018-06-25 10:09:07.009700\", \"item\": {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"rc\": 0, \"start\": \"2018-06-25 10:09:06.855895\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"creating /etc/ceph/ceph.client.manila.keyring\\nadded entity client.manila auth auth(auid = 18446744073709551615 key=AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ== with 0 caps)\", \"stdout_lines\": [\"creating /etc/ceph/ceph.client.manila.keyring\", \"added entity client.manila auth auth(auid = 18446744073709551615 key=AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ== with 0 caps)\"]}", "changed: [compute-0 -> 192.168.24.14] => (item={'caps': {'mds': u\"''\", 'osd': u\"'allow rwx'\", 'mon': u\"'allow rw'\", 'mgr': 
u\"'allow *'\"}, 'mode': u'0600', 'key': u'AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==', 'name': u'client.radosgw'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph-authtool\", \"--create-keyring\", \"/etc/ceph/ceph.client.radosgw.keyring\", \"--name\", \"client.radosgw\", \"--add-key\", \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"--cap\", \"mds\", \"''\", \"--cap\", \"osd\", \"'allow rwx'\", \"--cap\", \"mgr\", \"'allow *'\", \"--cap\", \"mon\", \"'allow rw'\"], \"delta\": \"0:00:00.150642\", \"end\": \"2018-06-25 10:09:07.621496\", \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"rc\": 0, \"start\": \"2018-06-25 10:09:07.470854\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"creating /etc/ceph/ceph.client.radosgw.keyring\\nadded entity client.radosgw auth auth(auid = 18446744073709551615 key=AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg== with 0 caps)\", \"stdout_lines\": [\"creating /etc/ceph/ceph.client.radosgw.keyring\", \"added entity client.radosgw auth auth(auid = 18446744073709551615 key=AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg== with 0 caps)\"]}", "", "TASK [ceph-client : slurp client cephx key(s)] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:62", "Monday 25 June 2018 06:09:07 -0400 (0:00:01.948) 0:04:28.945 *********** ", "ok: [compute-0 -> 192.168.24.14] => (item={'caps': {'mds': u\"''\", 'osd': u\"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\", 'mon': u\"'allow r'\", 'mgr': u\"'allow *'\"}, 'mode': u'0600', 'key': u'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==', 'name': u'client.openstack'}) => {\"changed\": false, \"content\": 
\"W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUNsSlMxYkFBQUFBQkFBZHpNQW44R2pObmtwMEdoNWJTOElNdz09CgljYXBzIG1kcyA9ICInJyIKCWNhcHMgbWdyID0gIidhbGxvdyAqJyIKCWNhcHMgbW9uID0gIidhbGxvdyByJyIKCWNhcHMgb3NkID0gIidhbGxvdyBjbGFzcy1yZWFkIG9iamVjdF9wcmVmaXggcmJkX2NoaWxkcmVuLCBhbGxvdyByd3ggcG9vbD12b2x1bWVzLCBhbGxvdyByd3ggcG9vbD1iYWNrdXBzLCBhbGxvdyByd3ggcG9vbD12bXMsIGFsbG93IHJ3eCBwb29sPWltYWdlcywgYWxsb3cgcnd4IHBvb2w9bWV0cmljcyciCg==\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"source\": \"/etc/ceph/ceph.client.openstack.keyring\"}", "ok: [compute-0 -> 192.168.24.14] => (item={'caps': {'mds': u\"'allow *'\", 'osd': u\"'allow rw'\", 'mon': u'\\'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"\\'', 'mgr': u\"'allow *'\"}, 'name': u'client.manila', 'key': u'AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==', 'mode': u'0600'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUNsSlMxYkFBQUFBQkFBSDJvM2wxL0JLU0VHVVRVR3B0OEZIUT09CgljYXBzIG1kcyA9ICInYWxsb3cgKiciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgciwgYWxsb3cgY29tbWFuZCBcImF1dGggZGVsXCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGNhcHNcIiwgYWxsb3cgY29tbWFuZCBcImF1dGggZ2V0XCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGdldC1vci1jcmVhdGVcIiciCgljYXBzIG9zZCA9ICInYWxsb3cgcncnIgo=\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": 
\"'allow rw'\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"source\": \"/etc/ceph/ceph.client.manila.keyring\"}", "ok: [compute-0 -> 192.168.24.14] => (item={'caps': {'mds': u\"''\", 'osd': u\"'allow rwx'\", 'mon': u\"'allow rw'\", 'mgr': u\"'allow *'\"}, 'mode': u'0600', 'key': u'AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==', 'name': u'client.radosgw'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFDbEpTMWJBQUFBQUJBQVJCUEJLZ1pseGh4SXJ6RlM5RnVlUmc9PQoJY2FwcyBtZHMgPSAiJyciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgcncnIgoJY2FwcyBvc2QgPSAiJ2FsbG93IHJ3eCciCg==\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"source\": \"/etc/ceph/ceph.client.radosgw.keyring\"}", "", "TASK [ceph-client : list existing pool(s)] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:74", "Monday 25 June 2018 06:09:08 -0400 (0:00:01.398) 0:04:30.343 *********** ", "", "TASK [ceph-client : create ceph pool(s)] ***************************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:86", "Monday 25 June 2018 06:09:08 -0400 (0:00:00.043) 0:04:30.386 *********** ", "", "TASK [ceph-client : kill a dummy container that created pool(s)/key(s)] ********", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:109", "Monday 25 June 2018 06:09:09 -0400 (0:00:00.049) 0:04:30.436 *********** ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"rm\", \"-f\", \"ceph-create-keys\"], \"delta\": \"0:00:00.149063\", \"end\": \"2018-06-25 10:09:09.842587\", \"rc\": 0, \"start\": \"2018-06-25 10:09:09.693524\", \"stderr\": \"\", \"stderr_lines\": 
[], \"stdout\": \"ceph-create-keys\", \"stdout_lines\": [\"ceph-create-keys\"]}", "", "TASK [ceph-client : get client cephx keys] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:116", "Monday 25 June 2018 06:09:09 -0400 (0:00:00.699) 0:04:31.135 *********** ", "changed: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUNsSlMxYkFBQUFBQkFBZHpNQW44R2pObmtwMEdoNWJTOElNdz09CgljYXBzIG1kcyA9ICInJyIKCWNhcHMgbWdyID0gIidhbGxvdyAqJyIKCWNhcHMgbW9uID0gIidhbGxvdyByJyIKCWNhcHMgb3NkID0gIidhbGxvdyBjbGFzcy1yZWFkIG9iamVjdF9wcmVmaXggcmJkX2NoaWxkcmVuLCBhbGxvdyByd3ggcG9vbD12b2x1bWVzLCBhbGxvdyByd3ggcG9vbD1iYWNrdXBzLCBhbGxvdyByd3ggcG9vbD12bXMsIGFsbG93IHJ3eCBwb29sPWltYWdlcywgYWxsb3cgcnd4IHBvb2w9bWV0cmljcyciCg==', 'failed': False, u'source': u'/etc/ceph/ceph.client.openstack.keyring', 'item': {'mode': u'0600', 'name': u'client.openstack', 'key': u'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==', 'caps': {'mds': u\"''\", 'osd': u\"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\", 'mgr': u\"'allow *'\", 'mon': u\"'allow r'\"}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.openstack.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.14'}, '_ansible_ignore_errors': None}) => {\"changed\": true, \"checksum\": \"e8c02c06312fda4c2590d332d52c324f8cc7ee59\", \"dest\": \"/etc/ceph/ceph.client.openstack.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": 
\"W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUNsSlMxYkFBQUFBQkFBZHpNQW44R2pObmtwMEdoNWJTOElNdz09CgljYXBzIG1kcyA9ICInJyIKCWNhcHMgbWdyID0gIidhbGxvdyAqJyIKCWNhcHMgbW9uID0gIidhbGxvdyByJyIKCWNhcHMgb3NkID0gIidhbGxvdyBjbGFzcy1yZWFkIG9iamVjdF9wcmVmaXggcmJkX2NoaWxkcmVuLCBhbGxvdyByd3ggcG9vbD12b2x1bWVzLCBhbGxvdyByd3ggcG9vbD1iYWNrdXBzLCBhbGxvdyByd3ggcG9vbD12bXMsIGFsbG93IHJ3eCBwb29sPWltYWdlcywgYWxsb3cgcnd4IHBvb2w9bWV0cmljcyciCg==\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.openstack.keyring\"}}, \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"source\": \"/etc/ceph/ceph.client.openstack.keyring\"}, \"md5sum\": \"3320618afc06e58268928352e4f18a11\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 307, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921349.83-43059450388249/source\", \"state\": \"file\", \"uid\": 167}", "changed: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUNsSlMxYkFBQUFBQkFBSDJvM2wxL0JLU0VHVVRVR3B0OEZIUT09CgljYXBzIG1kcyA9ICInYWxsb3cgKiciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgciwgYWxsb3cgY29tbWFuZCBcImF1dGggZGVsXCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGNhcHNcIiwgYWxsb3cgY29tbWFuZCBcImF1dGggZ2V0XCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGdldC1vci1jcmVhdGVcIiciCgljYXBzIG9zZCA9ICInYWxsb3cgcncnIgo=', 'failed': False, u'source': u'/etc/ceph/ceph.client.manila.keyring', 'item': {'name': u'client.manila', 'mode': u'0600', 'key': u'AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==', 'caps': 
{'mds': u\"'allow *'\", 'osd': u\"'allow rw'\", 'mgr': u\"'allow *'\", 'mon': u'\\'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"\\''}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.manila.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.14'}, '_ansible_ignore_errors': None}) => {\"changed\": true, \"checksum\": \"21419c0962bd0ff32415ed84be15002db21af2d5\", \"dest\": \"/etc/ceph/ceph.client.manila.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUNsSlMxYkFBQUFBQkFBSDJvM2wxL0JLU0VHVVRVR3B0OEZIUT09CgljYXBzIG1kcyA9ICInYWxsb3cgKiciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgciwgYWxsb3cgY29tbWFuZCBcImF1dGggZGVsXCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGNhcHNcIiwgYWxsb3cgY29tbWFuZCBcImF1dGggZ2V0XCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGdldC1vci1jcmVhdGVcIiciCgljYXBzIG9zZCA9ICInYWxsb3cgcncnIgo=\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.manila.keyring\"}}, \"item\": {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQClJS1bAAAAABAAH2o3l1/BKSEGUTUGpt8FHQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"source\": \"/etc/ceph/ceph.client.manila.keyring\"}, \"md5sum\": \"101079d72332a3a117d5dae184f86fd7\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 284, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921352.44-130021033914542/source\", \"state\": \"file\", \"uid\": 167}", "changed: [compute-0] => (item={'_ansible_parsed': 
True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFDbEpTMWJBQUFBQUJBQVJCUEJLZ1pseGh4SXJ6RlM5RnVlUmc9PQoJY2FwcyBtZHMgPSAiJyciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgcncnIgoJY2FwcyBvc2QgPSAiJ2FsbG93IHJ3eCciCg==', 'failed': False, u'source': u'/etc/ceph/ceph.client.radosgw.keyring', 'item': {'mode': u'0600', 'name': u'client.radosgw', 'key': u'AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==', 'caps': {'mds': u\"''\", 'osd': u\"'allow rwx'\", 'mgr': u\"'allow *'\", 'mon': u\"'allow rw'\"}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.radosgw.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.14'}, '_ansible_ignore_errors': None}) => {\"changed\": true, \"checksum\": \"f731a8adae069c294e3019338ac5e8a6703c7065\", \"dest\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFDbEpTMWJBQUFBQUJBQVJCUEJLZ1pseGh4SXJ6RlM5RnVlUmc9PQoJY2FwcyBtZHMgPSAiJyciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgcncnIgoJY2FwcyBvc2QgPSAiJ2FsbG93IHJ3eCciCg==\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.radosgw.keyring\"}}, \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQClJS1bAAAAABAARBPBKgZlxhxIrzFS9FueRg==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"source\": \"/etc/ceph/ceph.client.radosgw.keyring\"}, \"md5sum\": \"494a95924e2c9bb292694df2b83f8e2b\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 157, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921355.07-31406371722383/source\", \"state\": \"file\", \"uid\": 167}", "", "RUNNING HANDLER [ceph-defaults : set 
_mon_handler_called before restart] *******", "Monday 25 June 2018 06:09:17 -0400 (0:00:07.855) 0:04:38.991 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************", "Monday 25 June 2018 06:09:17 -0400 (0:00:00.065) 0:04:39.056 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***", "Monday 25 June 2018 06:09:17 -0400 (0:00:00.041) 0:04:39.098 *********** ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******", "Monday 25 June 2018 06:09:17 -0400 (0:00:00.075) 0:04:39.174 *********** ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********", "Monday 25 June 2018 06:09:17 -0400 (0:00:00.074) 0:04:39.249 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******", "Monday 25 June 2018 06:09:17 -0400 (0:00:00.068) 0:04:39.317 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************", "Monday 25 June 2018 06:09:17 -0400 (0:00:00.066) 0:04:39.383 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***", "Monday 
25 June 2018 06:09:18 -0400 (0:00:00.045) 0:04:39.428 *********** ", "skipping: [compute-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.071) 0:04:39.500 *********** ", "skipping: [compute-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.074) 0:04:39.575 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.064) 0:04:39.639 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.065) 0:04:39.705 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.040) 0:04:39.745 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.052) 0:04:39.797 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.052) 0:04:39.850 *********** ", "ok: 
[compute-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.062) 0:04:39.912 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.062) 0:04:39.975 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.040) 0:04:40.015 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.047) 0:04:40.063 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.047) 0:04:40.110 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.063) 0:04:40.174 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.064) 0:04:40.238 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd 
mirror daemon(s) - non container] ***", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.041) 0:04:40.279 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.049) 0:04:40.328 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***", "Monday 25 June 2018 06:09:18 -0400 (0:00:00.047) 0:04:40.376 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Monday 25 June 2018 06:09:19 -0400 (0:00:00.062) 0:04:40.438 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Monday 25 June 2018 06:09:19 -0400 (0:00:00.065) 0:04:40.503 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Monday 25 June 2018 06:09:19 -0400 (0:00:00.038) 0:04:40.542 *********** ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Monday 25 June 2018 06:09:19 -0400 (0:00:00.079) 0:04:40.622 *********** ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", 
"Monday 25 June 2018 06:09:19 -0400 (0:00:00.076) 0:04:40.698 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [set ceph client install 'Complete'] **************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:324", "Monday 25 June 2018 06:09:19 -0400 (0:00:00.087) 0:04:40.786 *********** ", "ok: [compute-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_client\": {\"end\": \"20180625060919Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "PLAY RECAP *********************************************************************", "ceph-0 : ok=88 changed=18 unreachable=0 failed=0 ", "compute-0 : ok=57 changed=7 unreachable=0 failed=0 ", "controller-0 : ok=119 changed=20 unreachable=0 failed=0 ", "", "", "INSTALLER STATUS ***************************************************************", "Install Ceph Monitor : Complete (0:01:09)", "Install Ceph Manager : Complete (0:00:40)", "Install Ceph OSD : Complete (0:01:45)", "Install Ceph Client : Complete (0:00:56)", "", "Monday 25 June 2018 06:09:19 -0400 (0:00:00.056) 0:04:40.842 *********** ", "=============================================================================== ", "ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image -------- 17.88s", "/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179 ----", "ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image -------- 17.38s", "/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179 ----", "ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image -------- 17.26s", "/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179 ----", "gather and delegate facts ----------------------------------------------- 9.29s", "/usr/share/ceph-ansible/site-docker.yml.sample:29 
-----------------------------", "ceph-client : get client cephx keys ------------------------------------- 7.86s", "/usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:116 -----", "ceph-osd : create openstack pool(s) ------------------------------------- 7.55s", "/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:21 ----------", "ceph-osd : prepare ceph containerized osd disk collocated --------------- 7.48s", "/usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:5 -------", "ceph-osd : assign application to pool(s) -------------------------------- 5.99s", "/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:41 ----------", "ceph-osd : copy to other mons the openstack cephx key(s) ---------------- 5.91s", "/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:71 ----------", "ceph-defaults : create ceph initial directories ------------------------- 5.76s", "/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18 ", "ceph-defaults : create ceph initial directories ------------------------- 5.68s", "/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18 ", "ceph-defaults : create ceph initial directories ------------------------- 5.49s", "/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18 ", "ceph-defaults : create ceph initial directories ------------------------- 4.99s", "/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18 ", "ceph-osd : list existing pool(s) ---------------------------------------- 4.34s", "/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:12 ----------", "ceph-osd : create openstack cephx key(s) -------------------------------- 4.34s", "/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:50 ----------", "ceph-config : generate ceph.conf configuration file --------------------- 3.41s", 
"/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84 -------------------", "ceph-config : generate ceph.conf configuration file --------------------- 3.20s", "/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84 -------------------", "ceph-mon : push ceph files to the ansible server ------------------------ 3.03s", "/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2 -------", "ceph-config : generate ceph.conf configuration file --------------------- 3.03s", "/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84 -------------------", "ceph-mgr : generate systemd unit file ----------------------------------- 2.98s", "/usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:2 ----"]} >2018-06-25 06:09:19,814 p=25239 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-06-25 06:09:19,836 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:19,854 p=25239 u=mistral | TASK [generate ceph-ansible group vars mgrs] *********************************** >2018-06-25 06:09:19,872 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:19,888 p=25239 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-06-25 06:09:19,908 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:19,925 p=25239 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-06-25 06:09:19,942 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:19,960 p=25239 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-06-25 06:09:19,977 p=25239 u=mistral | skipping: [undercloud] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:19,995 p=25239 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-06-25 06:09:20,015 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,034 p=25239 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-06-25 06:09:20,053 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,071 p=25239 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-06-25 06:09:20,091 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,097 p=25239 u=mistral | PLAY [Overcloud deploy step tasks for 2] *************************************** >2018-06-25 06:09:20,121 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:09:20,174 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,175 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,187 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,208 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:09:20,261 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,267 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,274 p=25239 u=mistral | skipping: [ceph-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,298 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:09:20,369 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,382 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,387 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,413 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:09:20,494 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,518 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,519 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,543 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:09:20,576 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,603 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,618 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,625 p=25239 u=mistral | PLAY [Overcloud common deploy step tasks 2] ************************************ >2018-06-25 06:09:20,654 p=25239 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-06-25 06:09:20,687 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,716 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,730 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,754 p=25239 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-06-25 06:09:20,820 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,821 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,846 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,868 p=25239 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-06-25 06:09:20,937 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,938 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,948 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:20,970 p=25239 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-06-25 06:09:21,022 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,023 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,035 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,058 p=25239 u=mistral | TASK [Create 
/var/lib/docker-config-scripts] *********************************** >2018-06-25 06:09:21,086 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,113 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,125 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,186 p=25239 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-06-25 06:09:21,225 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,259 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,273 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,305 p=25239 u=mistral | TASK [Write docker config scripts] ********************************************* >2018-06-25 06:09:21,373 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera 
-c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': 'nova_api_discover_hosts.sh'}) => {"changed": false, "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf 
keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c 
\"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,378 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': 'create_swift_secret.sh'}) => {"changed": false, "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster 
password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,379 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': 'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir 
/etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,379 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common 
--log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,381 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': 'set_swift_keymaster_key_id.sh'}) => {"changed": false, "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get 
/etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,383 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': 'docker_puppet_apply.sh'}) => {"changed": false, "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ 
-d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,386 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', 'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) => {"changed": false, "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,423 p=25239 u=mistral | TASK [Set docker_config_default fact] 
****************************************** >2018-06-25 06:09:21,460 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,461 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,494 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,495 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,496 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,496 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,499 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,500 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,505 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,505 
p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,511 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,512 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,522 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,526 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,532 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,537 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,543 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,547 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,573 p=25239 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** 
>2018-06-25 06:09:21,603 p=25239 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,630 p=25239 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,643 p=25239 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:21,666 p=25239 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-06-25 06:09:21,695 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,721 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,733 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,754 p=25239 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-06-25 06:09:21,815 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,818 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 
'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_libvirt': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', 
u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", 
"--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "skip_reason": 
"Conditional result was False"} >2018-06-25 06:09:21,823 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,837 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': [u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'memcached_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} 
/var/log/memcached.log'], 'user': u'root', 'volumes': [u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'detach': False, 'privileged': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=eT4ymnWN2YlqROumSbpNpoGCB', u'DB_ROOT_PASSWORD=ufdBL6tH5c'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag 
'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=eK5rGtu1BhrxK9TvrK0l'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'net': 
u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> 
/var/log/memcached.log 2>&1"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "memcached_init_logs": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} 
mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=eT4ymnWN2YlqROumSbpNpoGCB", "DB_ROOT_PASSWORD=ufdBL6tH5c"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", 
"KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=eK5rGtu1BhrxK9TvrK0l"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,857 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 
'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c 
'/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', 
u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c 
/usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': 
False}, 'neutron_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', 
u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'fLWtJZCynkwHz2bnZopp1aRC2'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "net": 
"host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, 
"image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "fLWtJZCynkwHz2bnZopp1aRC2"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", 
"/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", 
"/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "user": "root", "volumes": 
["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,880 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko 
/var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': 
[u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': 
{'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': 
[u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', 
u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", 
"/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": 
["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], 
"config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", 
"/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,885 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,889 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '78ace352-763a-11e8-9c1d-525400166144' --base64 'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw=='"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", 
"/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '78ace352-763a-11e8-9c1d-525400166144' --base64 'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,892 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,911 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', 
u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_statsd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', 
u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'gnocchi_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 
0 || sleep 5; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", 
"/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, 
"gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,934 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'swift_container_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', 
u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 
'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': 
{'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', 
u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', 
u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 
'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': 
u'host', 'restart': u'always'}, 'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', 
u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": 
"always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", 
"net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:21,948 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, 
"skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,002 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,002 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,003 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,003 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,004 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': 
u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,005 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,026 p=25239 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-06-25 06:09:22,053 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,078 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,092 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,113 p=25239 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-06-25 06:09:22,185 p=25239 
u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,189 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,190 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,200 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,201 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': '/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "skip_reason": "Conditional 
result was False"} >2018-06-25 06:09:22,202 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': '/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,207 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,213 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': 
u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/var/lib/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/var/lib/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,215 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,289 p=25239 u=mistral | skipping: [controller-0] => 
(item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,293 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/keystone.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,298 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": 
false, "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,302 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,307 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": 
"/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,315 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,316 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,322 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': 
True}]}, 'key': '/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,332 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,334 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, 
{'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': '/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,336 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': 
u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/haproxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,338 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,343 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,346 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,351 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,355 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/redis.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": 
"/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,358 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,362 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/glance_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,367 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,371 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 
'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,374 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,380 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 
'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,384 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,388 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': '/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,396 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,396 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,400 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,405 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,415 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': '/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,416 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 
'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/mysql.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,419 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,428 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,428 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,430 
p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': u'/var/log/keystone', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,434 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,439 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": 
[{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,443 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,448 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,452 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': 
u'/usr/bin/nova-api-metadata ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,457 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,462 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, 
{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,466 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": 
"nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,470 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,475 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, 
"preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,478 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,483 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": 
[{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,487 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": 
"Conditional result was False"} >2018-06-25 06:09:22,491 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,496 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,500 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': 
u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,505 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,510 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': '/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, 
"skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,513 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,518 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": 
"cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,522 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,526 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 'path': u'/var/log/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/panko_api.json'}) => {"changed": false, 
"item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,530 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,535 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": 
"/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,539 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,543 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server 
/etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,547 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,551 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,555 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": 
{"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,560 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,567 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 
'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': u'/var/lib/kolla/config_files/horizon.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,610 p=25239 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-06-25 06:09:22,622 p=25239 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-25 06:09:22,646 p=25239 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-25 06:09:22,673 p=25239 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-25 06:09:22,697 p=25239 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-06-25 06:09:22,751 p=25239 u=mistral | skipping: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 
'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4'}], 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,787 p=25239 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-06-25 06:09:22,815 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,839 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,852 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:09:22,874 p=25239 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-06-25 06:09:23,650 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "f17091ee142621a3c8290c8c96b5b52d67b3a864", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "0c07a8d2f57375a6b7ce729be89e77fb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921362.97-188687515318370/source", "state": "file", "uid": 0} >2018-06-25 06:09:23,651 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "f17091ee142621a3c8290c8c96b5b52d67b3a864", "dest": 
"/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "0c07a8d2f57375a6b7ce729be89e77fb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921362.91-96219426550443/source", "state": "file", "uid": 0} >2018-06-25 06:09:23,663 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "f17091ee142621a3c8290c8c96b5b52d67b3a864", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "0c07a8d2f57375a6b7ce729be89e77fb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921362.94-63246268324592/source", "state": "file", "uid": 0} >2018-06-25 06:09:23,687 p=25239 u=mistral | TASK [Run puppet host configuration for step 2] ******************************** >2018-06-25 06:09:32,849 p=25239 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-25 06:09:33,123 p=25239 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-25 06:09:37,196 p=25239 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-25 06:09:37,219 p=25239 u=mistral | TASK [Debug output for task which failed: Run puppet host configuration for step 2] *** >2018-06-25 06:09:37,271 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: 
Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.80 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller2]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/mode: mode changed '0400' to '0640'", > "Notice: Applied catalog in 3.66 seconds", > "Changes:", > " Total: 4", > "Events:", > " Success: 4", > "Resources:", > " Corrective change: 2", > " Total: 217", > " Out of sync: 4", > " Changed: 4", > "Time:", > " Concat file: 0.00", > " File line: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " User: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.00", > " Augeas: 0.02", > " Firewall: 0.02", > " File: 0.11", > " Service: 0.13", > " Package: 0.33", > " Pcmk property: 0.35", > " Exec: 0.80", > " Pcmk resource default: 0.97", > " Last run: 1529921376", > " Config retrieval: 3.34", > " Total: 6.08", > " Filebucket: 0.00", > " Concat fragment: 0.00", > "Version:", > " Config: 1529921369", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-25 06:09:37,294 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.75 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute2]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.49 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 141", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.01", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.02", > " File: 0.05", > " Service: 0.14", > " Exec: 0.32", > " Package: 0.36", > " Last run: 1529921372", > " Config retrieval: 2.07", > " Total: 2.98", > " Concat fragment: 0.00", > "Version:", > " Config: 1529921369", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-25 06:09:37,316 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 1.74 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage2]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.30 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 135", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Filebucket: 0.00", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.02", > " Service: 0.12", > " File: 0.13", > " Package: 0.25", > " Exec: 0.27", > " Last run: 1529921372", > " Config retrieval: 2.00", > " Total: 2.80", > "Version:", > " Config: 1529921369", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-25 06:09:37,340 p=25239 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 2] ***************** >2018-06-25 06:09:37,367 p=25239 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:37,390 p=25239 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:37,402 p=25239 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:37,424 p=25239 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 2] *** >2018-06-25 06:09:37,450 p=25239 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:09:37,474 p=25239 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:09:37,487 p=25239 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:09:37,509 p=25239 u=mistral | TASK [Start containers for step 2] ********************************************* >2018-06-25 06:09:38,167 p=25239 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:09:38,177 p=25239 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:11,963 p=25239 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} 
>2018-06-25 06:16:11,984 p=25239 u=mistral | TASK [Debug output for task which failed: Start containers for step 2] ********* >2018-06-25 06:16:12,089 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-06-25 06:16:12,094 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-06-25 06:16:23,463 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-scheduler ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-scheduler", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "5e7b63a88a76: Already exists", > "5ff72e309cb2: Pulling fs layer", > "5ff72e309cb2: Verifying Checksum", > "5ff72e309cb2: Download complete", > "5ff72e309cb2: Pull complete", > "Digest: sha256:66bdbed6e9d047b6e66b91abd2d4b5be29c06391601b7bbb7af3cac7974e15da", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-engine ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-engine", > "15497368e843: Already exists", > "b539b60217fe: Pulling fs layer", > "b539b60217fe: Verifying Checksum", > "b539b60217fe: Download complete", > "b539b60217fe: Pull complete", > "Digest: sha256:9bcd08156fc092b635fb8385d245b910af8a5e947388653ef3487dea959f5f20", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent", > "ea1d509b6f44: Already exists", > "84e2c5d46617: Pulling fs layer", > "84e2c5d46617: Verifying Checksum", > "84e2c5d46617: Download complete", > "84e2c5d46617: Pull complete", > "Digest: sha256:2b12cb81fb6a7677dac134f3ed7968a8291a4916cb68f30860f975a86eb5b2c7", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent", > "7ed3720e5907: Pulling fs layer", > "7ed3720e5907: Verifying Checksum", > "7ed3720e5907: Download complete", > "7ed3720e5907: Pull complete", > "Digest: sha256:1d4798d4eeddf04bbfb28605177aa39be1822e328d94cc562d4b0dd9cd2b72ba", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", > "stdout: fd0a51016d75935d37d43cc2011ba30a1dbce449b7ae7462f34a8a1b65900a13", > "stdout: ", > "stderr: Error: unable to find resource 'galera-bundle'", > "stdout: f3a224abe917622fc70803feeb64623be378a3404049512927152c587a398873", > "stdout: 64d2349b1e0e2e525cd8d326e7dde01f2b168a4a8980e8679f44e398d1e79143", > "stdout: 9d1c7bfc758e314b7e56125f58e3a7146a00cbc2b7ca013ae45b92e9428ad333", > "stdout: Skipping execution since this is not the bootstrap node for this service.", > "stdout: 62864dd9a9fec443d09ec2785dc7e0525ecb84ed823f9318898c75e3d1d96263", > "stdout: 21352611ae89d74b7bfa8d22b2cc8d77c80f7fc9853e2416ee4ce3a9fb215478", > "stdout: 9250e9808aedc12c7e70ca33e37b211dde6125318c31d1280c8c54b6a520da3f", > "stdout: 0420a10e485f8352e868e6e34f8382590f1dfa451893ca42566e26f3c88d1e7c", > "stdout: 7aa26523444e621c09ce58efbc7683110bd6bbce9bd876476150413cd42f2568", > "stdout: baa4078f68bf950c940150bb4b9769074302e7918eeb194a164a30078501963d", > "stdout: e57f2828bcaa5101da7283110e9b3d19db72fa595416d43d8b5e0fdde9b8ce4e", > "stdout: 08ff2eb646fa9f6c33386db14fb4b132b80084252c674c3d87c507df3a45956b", > "stdout: Debug: Runtime environment: puppet_version=4.8.2, ruby_version=2.0.0, run_mode=user, default_encoding=US-ASCII", > "Debug: Evicting cache entry for environment 'production'", > "Debug: Caching environment 'production' (ttl = 0 sec)", > "Debug: Loading external facts from /etc/puppet/modules/openstacklib/facts.d", > "Debug: Loading external facts from /var/lib/puppet/facts.d", > "Info: Loading facts", > "Debug: Loading facts from 
/etc/puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /etc/puppet/modules/haproxy/lib/facter/haproxy_version.rb", > "Debug: Loading facts from /etc/puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /etc/puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /etc/puppet/modules/pacemaker/lib/facter/pcmk_is_remote.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: 
Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysqld_version.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/apache/lib/facter/apache_version.rb", 
> "Debug: Loading facts from /etc/puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Loading facts from /etc/puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/nic_alias.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /etc/puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/sensu/lib/facter/sensu_version.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /etc/puppet/modules/nova/lib/facter/ipa_hostname.rb", > "Debug: Loading facts from /etc/puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/haproxy/lib/facter/haproxy_version.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/pacemaker/lib/facter/pcmk_is_remote.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysqld_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/nic_alias.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/sensu/lib/facter/sensu_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/nova/lib/facter/ipa_hostname.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Executing: '/usr/bin/rpm --version'", > "Debug: Failed to load library 'cfpropertylist' for feature 'cfpropertylist'", > "Debug: Executing: '/usr/bin/rpm -ql rpm'", > "Debug: Facter: value for agent_specified_environment is still nil", > "Debug: Facter: value for cfkey is still 
nil", > "Debug: Facter: Found no suitable resolves of 1 for dhcp_servers", > "Debug: Facter: value for dhcp_servers is still nil", > "Debug: Facter: value for ec2_public_ipv4 is still nil", > "Debug: Facter: Found no suitable resolves of 1 for gce", > "Debug: Facter: value for gce is still nil", > "Debug: Facter: value for ipaddress6_br_ex is still nil", > "Debug: Facter: value for ipaddress_br_isolated is still nil", > "Debug: Facter: value for ipaddress6_br_isolated is still nil", > "Debug: Facter: value for netmask_br_isolated is still nil", > "Debug: Facter: value for ipaddress6_docker0 is still nil", > "Debug: Facter: value for ipaddress6_eth0 is still nil", > "Debug: Facter: value for ipaddress_eth1 is still nil", > "Debug: Facter: value for ipaddress6_eth1 is still nil", > "Debug: Facter: value for netmask_eth1 is still nil", > "Debug: Facter: value for ipaddress_eth2 is still nil", > "Debug: Facter: value for ipaddress6_eth2 is still nil", > "Debug: Facter: value for netmask_eth2 is still nil", > "Debug: Facter: value for ipaddress6_lo is still nil", > "Debug: Facter: value for macaddress_lo is still nil", > "Debug: Facter: value for ipaddress_ovs_system is still nil", > "Debug: Facter: value for ipaddress6_ovs_system is still nil", > "Debug: Facter: value for netmask_ovs_system is still nil", > "Debug: Facter: value for ipaddress6_vlan20 is still nil", > "Debug: Facter: value for ipaddress6_vlan30 is still nil", > "Debug: Facter: value for ipaddress6_vlan40 is still nil", > "Debug: Facter: value for ipaddress6_vlan50 is still nil", > "Debug: Facter: value for ipaddress6 is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iphostnumber", > "Debug: Facter: value for iphostnumber is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistcodename", > "Debug: Facter: value for lsbdistcodename is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistdescription", > "Debug: Facter: value for lsbdistdescription 
is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistid", > "Debug: Facter: value for lsbdistid is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistrelease", > "Debug: Facter: value for lsbdistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbmajdistrelease", > "Debug: Facter: value for lsbmajdistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbminordistrelease", > "Debug: Facter: value for lsbminordistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbrelease", > "Debug: Facter: value for lsbrelease is still nil", > "Debug: Facter: Found no suitable resolves of 2 for swapencrypted", > "Debug: Facter: value for swapencrypted is still nil", > "Debug: Facter: value for network_br_isolated is still nil", > "Debug: Facter: value for network_eth1 is still nil", > "Debug: Facter: value for network_eth2 is still nil", > "Debug: Facter: value for network_ovs_system is still nil", > "Debug: Facter: Found no suitable resolves of 1 for processor", > "Debug: Facter: value for processor is still nil", > "Debug: Facter: value for is_rsc is still nil", > "Debug: Facter: Found no suitable resolves of 1 for rsc_region", > "Debug: Facter: value for rsc_region is still nil", > "Debug: Facter: Found no suitable resolves of 1 for rsc_instance_id", > "Debug: Facter: value for rsc_instance_id is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_enforced", > "Debug: Facter: value for selinux_enforced is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_policyversion", > "Debug: Facter: value for selinux_policyversion is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_current_mode", > "Debug: Facter: value for selinux_current_mode is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_config_mode", > "Debug: Facter: value for selinux_config_mode is still nil", 
> "Debug: Facter: Found no suitable resolves of 1 for selinux_config_policy", > "Debug: Facter: value for selinux_config_policy is still nil", > "Debug: Facter: value for sshdsakey is still nil", > "Debug: Facter: value for sshfp_dsa is still nil", > "Debug: Facter: value for sshrsakey is still nil", > "Debug: Facter: value for sshfp_rsa is still nil", > "Debug: Facter: value for sshecdsakey is still nil", > "Debug: Facter: value for sshfp_ecdsa is still nil", > "Debug: Facter: value for sshed25519key is still nil", > "Debug: Facter: value for sshfp_ed25519 is still nil", > "Debug: Facter: Found no suitable resolves of 1 for system32", > "Debug: Facter: value for system32 is still nil", > "Debug: Facter: value for vlans is still nil", > "Debug: Facter: Found no suitable resolves of 1 for xendomains", > "Debug: Facter: value for xendomains is still nil", > "Debug: Facter: value for zfs_version is still nil", > "Debug: Facter: Found no suitable resolves of 1 for zonename", > "Debug: Facter: value for zonename is still nil", > "Debug: Facter: value for zpool_version is still nil", > "Debug: Facter: value for java_version is still nil", > "Debug: Facter: value for java_major_version is still nil", > "Debug: Facter: value for java_patch_level is still nil", > "Debug: Facter: value for java_default_home is still nil", > "Debug: Facter: value for java_libjvm_path is still nil", > "Debug: Facter: value for ssh_client_version_full is still nil", > "Debug: Facter: value for ssh_client_version_major is still nil", > "Debug: Facter: value for ssh_client_version_release is still nil", > "Debug: Facter: value for ssh_server_version_full is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_server_version_major", > "Debug: Facter: value for ssh_server_version_major is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_server_version_release", > "Debug: Facter: value for ssh_server_version_release is still nil", > "Debug: Facter: Found no 
suitable resolves of 2 for iptables_persistent_version", > "Debug: Facter: value for iptables_persistent_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for staging_windir", > "Debug: Facter: value for staging_windir is still nil", > "Debug: Facter: value for cassandrarelease is still nil", > "Debug: Facter: value for cassandraminorversion is still nil", > "Debug: Facter: value for cassandrapatchversion is still nil", > "Debug: Facter: value for cassandramajorversion is still nil", > "Debug: Facter: value for mysqld_version is still nil", > "Debug: Facter: value for mysql_version is still nil", > "Debug: Facter: value for git_html_path is still nil", > "Debug: Facter: value for git_version is still nil", > "Debug: Facter: value for git_exec_path is still nil", > "Debug: Facter: value for collectd_version is still nil", > "Debug: Facter: value for sssd_version is still nil", > "Debug: Facter: value for rabbitmq_nodename is still nil", > "Debug: Facter: value for rabbitmq_version is still nil", > "Debug: Puppet::Type::Package::ProviderSensu_gem: file /opt/sensu/embedded/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderTdagent: file /opt/td-agent/usr/sbin/td-agent-gem does not exist", > "Debug: Puppet::Type::Package::ProviderAix: file /usr/bin/lslpp does not exist", > "Debug: Puppet::Type::Package::ProviderDpkg: file /usr/bin/dpkg does not exist", > "Debug: Puppet::Type::Package::ProviderApt: file /usr/bin/apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderAptitude: file /usr/bin/aptitude does not exist", > "Debug: Puppet::Type::Package::ProviderAptrpm: file apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderSun: file /usr/bin/pkginfo does not exist", > "Debug: Puppet::Type::Package::ProviderDnf: file dnf does not exist", > "Debug: Puppet::Type::Package::ProviderFink: file /sw/bin/fink does not exist", > "Debug: Puppet::Type::Package::ProviderOpenbsd: file pkg_info does not exist", > "Debug: 
Puppet::Type::Package::ProviderFreebsd: file /usr/sbin/pkg_info does not exist", > "Debug: Puppet::Type::Package::ProviderHpux: file /usr/sbin/swinstall does not exist", > "Debug: Puppet::Type::Package::ProviderNim: file /usr/sbin/nimclient does not exist", > "Debug: Puppet::Type::Package::ProviderOpkg: file opkg does not exist", > "Debug: Puppet::Type::Package::ProviderPacman: file /usr/bin/pacman does not exist", > "Debug: Puppet::Type::Package::ProviderPkg: file /usr/bin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderPkgin: file pkgin does not exist", > "Debug: Puppet::Type::Package::ProviderPkgng: file /usr/local/sbin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderPortage: file /usr/bin/emerge does not exist", > "Debug: Puppet::Type::Package::ProviderPorts: file /usr/local/sbin/portupgrade does not exist", > "Debug: Puppet::Type::Package::ProviderPortupgrade: file /usr/local/sbin/portupgrade does not exist", > "Debug: Puppet::Type::Package::ProviderPuppet_gem: file /opt/puppetlabs/puppet/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderRug: file /usr/bin/rug does not exist", > "Debug: Puppet::Type::Package::ProviderSunfreeware: file pkg-get does not exist", > "Debug: Puppet::Type::Package::ProviderTdnf: file tdnf does not exist", > "Debug: Puppet::Type::Package::ProviderUp2date: file /usr/sbin/up2date-nox does not exist", > "Debug: Puppet::Type::Package::ProviderUrpmi: file urpmi does not exist", > "Debug: Puppet::Type::Package::ProviderZypper: file /usr/bin/zypper does not exist", > "Debug: Facter: value for pe_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_major_version", > "Debug: Facter: value for pe_major_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_minor_version", > "Debug: Facter: value for pe_minor_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_patch_version", > "Debug: Facter: value for pe_patch_version is 
still nil", > "Debug: Puppet::Type::Service::ProviderNoop: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderInit: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderDaemontools: file /usr/bin/svc does not exist", > "Debug: Puppet::Type::Service::ProviderDebian: file /usr/sbin/update-rc.d does not exist", > "Debug: Puppet::Type::Service::ProviderGentoo: file /sbin/rc-update does not exist", > "Debug: Puppet::Type::Service::ProviderLaunchd: file /bin/launchctl does not exist", > "Debug: Puppet::Type::Service::ProviderOpenbsd: file /usr/sbin/rcctl does not exist", > "Debug: Puppet::Type::Service::ProviderOpenrc: file /bin/rc-status does not exist", > "Debug: Puppet::Type::Service::ProviderRedhat: file /sbin/service does not exist", > "Debug: Puppet::Type::Service::ProviderRunit: file /usr/bin/sv does not exist", > "Debug: Puppet::Type::Service::ProviderUpstart: 0 confines (of 4) were true", > "Debug: Facter: value for redis_server_version is still nil", > "Debug: Facter: value for apache_version is still nil", > "Debug: Facter: value for nic_alias is still nil", > "Debug: Facter: value for netmask6_ovs_system is still nil", > "Debug: Facter: value for ovs_uuid is still nil", > "Debug: Facter: value for ovs_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for archive_windir", > "Debug: Facter: value for archive_windir is still nil", > "Debug: Facter: value for sensu_version is still nil", > "Debug: Facter: value for ipa_hostname is still nil", > "Debug: Facter: value for libvirt_uuid is still nil", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/pacemaker.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::pacemaker from tripleo/profile/base/pacemaker into production", > "Notice: hiera(): Cannot load backend module_data: cannot load such file 
-- hiera/backend/module_data_backend", > "Debug: hiera(): Hiera JSON backend starting", > "Debug: hiera(): Looking up lookup_options in JSON backend", > "Debug: hiera(): Looking for data source docker", > "Debug: hiera(): Looking for data source heat_config_", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/heat_config_.json, skipping", > "Debug: hiera(): Looking for data source config_step", > "Debug: hiera(): Looking for data source controller_extraconfig", > "Debug: hiera(): Looking for data source extraconfig", > "Debug: hiera(): Looking for data source service_names", > "Debug: hiera(): Looking for data source service_configs", > "Debug: hiera(): Looking for data source controller", > "Debug: hiera(): Looking for data source bootstrap_node", > "Debug: hiera(): Looking for data source all_nodes", > "Debug: hiera(): Looking for data source vip_data", > "Debug: hiera(): Looking for data source net_ip_map", > "Debug: hiera(): Looking for data source RedHat", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/RedHat.json, skipping", > "Debug: hiera(): Looking for data source neutron_bigswitch_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/neutron_bigswitch_data.json, skipping", > "Debug: hiera(): Looking for data source neutron_cisco_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/neutron_cisco_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_n1kv_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_n1kv_data.json, skipping", > "Debug: hiera(): Looking for data source midonet_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/midonet_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_aci_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_aci_data.json, skipping", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::step in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::base::pacemaker::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_node_ips in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_authkey in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_monitor_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::encryption in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::enable_instanceha in JSON backend", > "Debug: hiera(): Looking up step in JSON backend", > "Debug: hiera(): Looking up pcs_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_node_ips in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_monitor_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker_cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::instanceha in JSON backend", > "Debug: hiera(): Looking up hacluster_pwd in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up 
enable_fencing in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_node_names in JSON backend", > "Debug: hiera(): Looking up corosync_ipv6 in JSON backend", > "Debug: hiera(): Looking up corosync_token_timeout in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/init.pp' in environment production", > "Debug: Automatically imported pacemaker from pacemaker into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/params.pp' in environment production", > "Debug: Automatically imported pacemaker::params from pacemaker/params into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/install.pp' in environment production", > "Debug: Automatically imported pacemaker::install from pacemaker/install into production", > "Debug: hiera(): Looking up pacemaker::install::ensure in JSON backend", > "Debug: Resource package[pacemaker] was not determined to be defined", > "Debug: Create new resource package[pacemaker] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pcs] was not determined to be defined", > "Debug: Create new resource package[pcs] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[fence-agents-all] was not determined to be defined", > "Debug: Create new resource package[fence-agents-all] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pacemaker-libs] was not determined to be defined", > "Debug: Create new resource package[pacemaker-libs] with params {\"ensure\"=>\"present\"}", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/service.pp' in environment production", > "Debug: Automatically imported pacemaker::service from pacemaker/service into production", > "Debug: hiera(): Looking up pacemaker::service::ensure in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasstatus in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasrestart in JSON backend", > "Debug: hiera(): Looking up 
pacemaker::service::enable in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/corosync.pp' in environment production", > "Debug: Automatically imported pacemaker::corosync from pacemaker/corosync into production", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_members_rrp in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_name in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::manage_fw in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::pcsd_debug in JSON backend", > "Debug: template[inline]: Bound template variables for inline template in 0.00 seconds", > "Debug: template[inline]: Interpolated template inline template in 0.00 seconds", > "Debug: hiera(): Looking up docker_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/systemd/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/systemctl/daemon_reload.pp' in environment production", > "Debug: Automatically imported systemd::systemctl::daemon_reload from systemd/systemctl/daemon_reload into production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/unit_file.pp' in environment production", > "Debug: importing '/etc/puppet/modules/stdlib/manifests/init.pp' in environment production", > "Debug: Automatically imported systemd::unit_file from systemd/unit_file into production", > "Debug: importing 
'/etc/puppet/modules/pacemaker/manifests/stonith.pp' in environment production", > "Debug: Automatically imported pacemaker::stonith from pacemaker/stonith into production", > "Debug: hiera(): Looking up pacemaker::stonith::try_sleep in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/property.pp' in environment production", > "Debug: Automatically imported pacemaker::property from pacemaker/property into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource_defaults.pp' in environment production", > "Debug: Automatically imported pacemaker::resource_defaults from pacemaker/resource_defaults into production", > "Debug: hiera(): Looking up pacemaker::resource_defaults::defaults in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::post_success_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::verify_on_create in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::ensure in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/rabbitmq_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::rabbitmq_bundle from tripleo/profile/pacemaker/rabbitmq_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rabbitmq_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rabbitmq_docker_control_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::erlang_cookie in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::user_ha_queues in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::pacemaker::rabbitmq_bundle::rpc_scheme in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rpc_bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rpc_nodes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::notify_scheme in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::notify_bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::notify_nodes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::control_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::erlang_cookie in JSON backend", > "Debug: hiera(): Looking up rabbitmq::nr_ha_queues in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_scheme in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_node_names in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_scheme in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_node_names in JSON backend", > "Debug: hiera(): Looking up enable_internal_tls in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/rabbitmq.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::rabbitmq from tripleo/profile/base/rabbitmq into production", > "Debug: 
hiera(): Looking up tripleo::profile::base::rabbitmq::certificate_specs in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::config_variables in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::environment in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::ssl_versions in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::inter_node_ciphers in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::inet_dist_interface in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::ipv6 in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::kernel_variables in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rpc_scheme in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rpc_nodes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rpc_bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::notify_scheme in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::notify_nodes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::notify_bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rabbitmq_pass in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rabbitmq_user in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::stack_action in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::step in JSON backend", > "Debug: hiera(): Looking up rabbitmq_config_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq_environment in JSON backend", > "Debug: hiera(): Looking up 
rabbitmq::interface in JSON backend", > "Debug: hiera(): Looking up internal_api in JSON backend", > "Debug: hiera(): Looking up rabbit_ipv6 in JSON backend", > "Debug: hiera(): Looking up rabbitmq_kernel_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq::default_pass in JSON backend", > "Debug: hiera(): Looking up rabbitmq::default_user in JSON backend", > "Debug: hiera(): Looking up stack_action in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service_manage in JSON backend", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/init.pp' in environment production", > "Debug: Automatically imported rabbitmq from rabbitmq into production", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/params.pp' in environment production", > "Debug: Automatically imported rabbitmq::params from rabbitmq/params into production", > "Debug: hiera(): Looking up rabbitmq::admin_enable in JSON backend", > "Debug: hiera(): Looking up rabbitmq::cluster_node_type in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_path in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_ranch in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_stomp in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_shovel in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_shovel_statics in JSON backend", > "Debug: hiera(): Looking up rabbitmq::delete_guest_user in JSON backend", > "Debug: hiera(): Looking up rabbitmq::env_config in JSON backend", > "Debug: hiera(): Looking up rabbitmq::env_config_path in JSON backend", > "Debug: hiera(): Looking up rabbitmq::management_ip_address in JSON backend", > "Debug: hiera(): Looking up rabbitmq::management_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::management_ssl in JSON backend", > "Debug: hiera(): Looking up rabbitmq::management_hostname in JSON backend", > "Debug: hiera(): Looking up 
rabbitmq::node_ip_address in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_apt_pin in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_gpg_key in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_name in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_source in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_provider in JSON backend", > "Debug: hiera(): Looking up rabbitmq::repos_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::manage_python in JSON backend", > "Debug: hiera(): Looking up rabbitmq::rabbitmq_user in JSON backend", > "Debug: hiera(): Looking up rabbitmq::rabbitmq_group in JSON backend", > "Debug: hiera(): Looking up rabbitmq::rabbitmq_home in JSON backend", > "Debug: hiera(): Looking up rabbitmq::port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::tcp_keepalive in JSON backend", > "Debug: hiera(): Looking up rabbitmq::tcp_backlog in JSON backend", > "Debug: hiera(): Looking up rabbitmq::tcp_sndbuf in JSON backend", > "Debug: hiera(): Looking up rabbitmq::tcp_recbuf in JSON backend", > "Debug: hiera(): Looking up rabbitmq::heartbeat in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service_name in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_only in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_cacert in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_cert in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_key in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_depth in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_cert_password in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_interface in 
JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_management_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_stomp_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_verify in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_fail_if_no_peer_cert in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_management_verify in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_management_fail_if_no_peer_cert in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_versions in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_secure_renegotiate in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_reuse_sessions in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_honor_cipher_order in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_dhfile in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_ciphers in JSON backend", > "Debug: hiera(): Looking up rabbitmq::stomp_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_auth in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_server in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_user_dn_pattern in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_other_bind in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_use_ssl in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_log in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_config_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq::stomp_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::stomp_ssl_only in JSON backend", > "Debug: hiera(): Looking up rabbitmq::wipe_db_on_cookie_change in JSON backend", > "Debug: hiera(): Looking up rabbitmq::cluster_partition_handling in JSON backend", > "Debug: hiera(): Looking up rabbitmq::file_limit in JSON backend", > "Debug: hiera(): Looking up 
rabbitmq::config_management_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_additional_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq::auth_backends in JSON backend", > "Debug: hiera(): Looking up rabbitmq::key_content in JSON backend", > "Debug: hiera(): Looking up rabbitmq::collect_statistics_interval in JSON backend", > "Debug: hiera(): Looking up rabbitmq::inetrc_config in JSON backend", > "Debug: hiera(): Looking up rabbitmq::inetrc_config_path in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_erl_dist in JSON backend", > "Debug: hiera(): Looking up rabbitmq::rabbitmqadmin_package in JSON backend", > "Debug: hiera(): Looking up rabbitmq::archive_options in JSON backend", > "Debug: hiera(): Looking up rabbitmq::loopback_users in JSON backend", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/install.pp' in environment production", > "Debug: Automatically imported rabbitmq::install from rabbitmq/install into production", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/config.pp' in environment production", > "Debug: Automatically imported rabbitmq::config from rabbitmq/config into production", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/rabbitmq.config.erb", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq.config.erb]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/rabbitmq.config.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq.config.erb]: Interpolated template /etc/puppet/modules/rabbitmq/templates/rabbitmq.config.erb in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/rabbitmq-env.conf.erb", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq-env.conf.erb]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/rabbitmq-env.conf.erb in 0.00 seconds", > "Debug: 
template[/etc/puppet/modules/rabbitmq/templates/rabbitmq-env.conf.erb]: Interpolated template /etc/puppet/modules/rabbitmq/templates/rabbitmq-env.conf.erb in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/inetrc.erb", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/inetrc.erb]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/inetrc.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/inetrc.erb]: Interpolated template /etc/puppet/modules/rabbitmq/templates/inetrc.erb in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/rabbitmqadmin.conf.erb", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmqadmin.conf.erb]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/rabbitmqadmin.conf.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmqadmin.conf.erb]: Interpolated template /etc/puppet/modules/rabbitmq/templates/rabbitmqadmin.conf.erb in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/rabbitmq-server.service.d/limits.conf", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq-server.service.d/limits.conf]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/rabbitmq-server.service.d/limits.conf in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq-server.service.d/limits.conf]: Interpolated template /etc/puppet/modules/rabbitmq/templates/rabbitmq-server.service.d/limits.conf in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/limits.conf", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/limits.conf]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/limits.conf in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/limits.conf]: Interpolated template 
/etc/puppet/modules/rabbitmq/templates/limits.conf in 0.00 seconds", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/service.pp' in environment production", > "Debug: Automatically imported rabbitmq::service from rabbitmq/service into production", > "Debug: hiera(): Looking up rabbitmq::service::service_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service::service_manage in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service::service_name in JSON backend", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/management.pp' in environment production", > "Debug: Automatically imported rabbitmq::management from rabbitmq/management into production", > "Debug: hiera(): Looking up veritas_hyperscale_controller_enabled in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_short_node_names in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource/bundle.pp' in environment production", > "Debug: Automatically imported pacemaker::resource::bundle from pacemaker/resource/bundle into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource/ocf.pp' in environment production", > "Debug: Automatically imported pacemaker::resource::ocf from pacemaker/resource/ocf into production", > "Debug: hiera(): Looking up systemd::service_limits in JSON backend", > "Debug: hiera(): Looking up systemd::manage_resolved in JSON backend", > "Debug: hiera(): Looking up systemd::resolved_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_networkd in JSON backend", > "Debug: hiera(): Looking up systemd::networkd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_timesyncd in JSON backend", > "Debug: hiera(): Looking up systemd::timesyncd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::ntp_server in JSON backend", > "Debug: hiera(): Looking up systemd::fallback_ntp_server in JSON backend", > "Debug: hiera(): Looking up 
pacemaker::resource::bundle::deep_compare in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource::ocf::deep_compare in JSON backend", > "Debug: Adding relationship from Service[pcsd] to Exec[auth-successful-across-all-nodes] with 'before'", > "Debug: Adding relationship from Exec[reauthenticate-across-all-nodes] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Exec[reauthenticate-across-all-nodes] to Exec[Create Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to Exec[Create Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[Create Cluster tripleo_cluster] to Exec[Start Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[Start Cluster tripleo_cluster] to Service[corosync] with 'before'", > "Debug: Adding relationship from Exec[Start Cluster tripleo_cluster] to Service[pacemaker] with 'before'", > "Debug: Adding relationship from Service[corosync] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Service[pacemaker] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from File[etc-pacemaker] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from File[etc-pacemaker-authkey] to Exec[Create Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[rabbitmq] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property--stonith-enabled] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-rabbitmq-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[rabbitmq-bundle] with 'before'", > "Debug: Adding relationship from Class[Pacemaker] to 
Class[Pacemaker::Corosync] with 'before'", > "Debug: Adding relationship from File[/etc/systemd/system/resource-agents-deps.target.wants] to Systemd::Unit_file[docker.service] with 'before'", > "Debug: Adding relationship from Systemd::Unit_file[docker.service] to Class[Systemd::Systemctl::Daemon_reload] with 'notify'", > "Debug: Adding relationship from File[/etc/systemd/system/rabbitmq-server.service.d] to File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf] with 'before'", > "Debug: Adding relationship from Class[Rabbitmq::Install] to Class[Rabbitmq::Config] with 'before'", > "Debug: Adding relationship from Class[Rabbitmq::Config] to Class[Rabbitmq::Service] with 'notify'", > "Debug: Adding relationship from Class[Rabbitmq::Service] to Class[Rabbitmq::Management] with 'before'", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.47 seconds", > "Debug: puppet-pacemaker: initialize()", > "Debug: Creating default schedules", > "Info: Applying configuration version '1529921400'", > "Debug: /Stage[main]/Pacemaker/before: subscribes to Class[Pacemaker::Corosync]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/before: subscribes to Exec[auth-successful-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/before: subscribes to 
Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/notify: subscribes to Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/notify: subscribes to Exec[reauthenticate-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/before: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/require: subscribes to User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/before: subscribes to Exec[Start Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/require: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/before: subscribes to Service[corosync]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/before: subscribes to Service[pacemaker]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/before: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: 
subscribes to Pcmk_resource[rabbitmq]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property--stonith-enabled]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-rabbitmq-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[rabbitmq-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/before: subscribes to Systemd::Unit_file[docker.service]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/before: subscribes to Class[Pacemaker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Rabbitmq::Install/before: subscribes to Class[Rabbitmq::Config]", > "Debug: /Stage[main]/Rabbitmq::Install/Package[rabbitmq-server]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/require: subscribes to File[/etc/rabbitmq]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/before: subscribes to File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/notify: subscribes to Exec[rabbitmq-systemd-reload]", > "Debug: 
/Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/Rabbitmq_erlang_cookie[/var/lib/rabbitmq/.erlang.cookie]/before: subscribes to File[rabbitmq.config]", > "Debug: /Stage[main]/Rabbitmq::Config/Rabbitmq_erlang_cookie[/var/lib/rabbitmq/.erlang.cookie]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Service/before: subscribes to Class[Rabbitmq::Management]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]/require: subscribes to Class[Rabbitmq]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Property[rabbitmq-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[rabbitmq-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/require: subscribes to Class[Rabbitmq]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/require: subscribes to Pacemaker::Resource::Bundle[rabbitmq-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/before: subscribes to Exec[rabbitmq-ready]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]: Adding autorequire 
relationship with File[/etc/rabbitmq]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]: Adding autorequire relationship with File[/etc/rabbitmq]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]: Adding autorequire relationship with File[/etc/rabbitmq]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]: Adding autorequire relationship with File[/etc/rabbitmq]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Adding autorequire relationship with File[/etc/systemd/system/resource-agents-deps.target.wants]", > "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Stage[main]: Resource is being skipped, unscheduling all events", > "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Settings]: Resource is being skipped, unscheduling all events", > "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Main]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: 
Class[Tripleo::Profile::Base::Pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Install]: Resource is being skipped, unscheduling all events", > "Debug: Prefetching yum resources for package", > "Debug: Executing '/usr/bin/rpm -qa --nosignature --nodigest --qf '%{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n''", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Service]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]: The container Class[Tripleo::Profile::Base::Pacemaker] will propagate my refresh event", > "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Systemd::Unit_file[docker.service]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Stonith]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Pacemaker::Property[Disable STONITH]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Resource_defaults]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Rabbitmq]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Tripleo::Profile::Base::Rabbitmq]: Resource is being skipped, unscheduling all events", > "Debug: Class[Rabbitmq::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Rabbitmq]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq]: Resource is being skipped, unscheduling all events", > "Debug: Class[Rabbitmq::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Install]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Rabbitmq::Install/Package[rabbitmq-server]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Rabbitmq::Install/Package[rabbitmq-server]: Resource is being skipped, unscheduling all events", > "Debug: Class[Rabbitmq::Config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, 
rabbitmq_ready", > "Debug: Class[Rabbitmq::Config]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/group: group changed 'rabbitmq' to 'root'", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]/ensure: created", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/ensure: defined content as '{md5}7c5cf5bed5504668815fc0e555e57c66'", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Info: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]: Scheduling refresh of Class[Rabbitmq::Service]", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/ensure: defined content as '{md5}12f8d1a1f9f57f23c1be6c7bf2286e73'", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Info: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]: Scheduling refresh of Class[Rabbitmq::Service]", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/ensure: defined content as '{md5}44d4ef5cb86ab30e6127e83939ef09c4'", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/ensure: created", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Notice: 
/Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/ensure: defined content as '{md5}91d370d2c5a1af171c9d5b5985fca733'", > "Info: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]: Scheduling refresh of Exec[rabbitmq-systemd-reload]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Debug: /Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]: Resource is being skipped, unscheduling all events", > "Info: /Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]: Unscheduling all events on Exec[rabbitmq-systemd-reload]", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/ensure: defined content as '{md5}1030abc4db405b5f2969643e99bc7435'", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Info: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]: Scheduling refresh of Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/Rabbitmq_erlang_cookie[/var/lib/rabbitmq/.erlang.cookie]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Rabbitmq::Config/Rabbitmq_erlang_cookie[/var/lib/rabbitmq/.erlang.cookie]: Resource is being skipped, unscheduling 
all events", > "Info: Computing checksum on file /etc/rabbitmq/rabbitmq.config", > "Info: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]: Filebucketed /etc/rabbitmq/rabbitmq.config to puppet with sum b346ec0a8320f85f795bf612f6b02da7", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/content: content changed '{md5}b346ec0a8320f85f795bf612f6b02da7' to '{md5}ebe8fcd83a98e09c5651db7925e2dd8b'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/mode: mode changed '0644' to '0640'", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Info: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]: Scheduling refresh of Class[Rabbitmq::Service]", > "Info: Class[Rabbitmq::Config]: Unscheduling all events on Class[Rabbitmq::Config]", > "Debug: Class[Rabbitmq::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Service]: Resource is being skipped, unscheduling all events", > "Info: Class[Rabbitmq::Service]: Unscheduling all events on Class[Rabbitmq::Service]", > "Debug: Class[Rabbitmq::Management]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Management]: Resource is being skipped, unscheduling all events", > "Info: Computing checksum on file /var/lib/rabbitmq/.erlang.cookie", > "Info: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]: Filebucketed 
/var/lib/rabbitmq/.erlang.cookie to puppet with sum 52e72ed887389862eb17c50477267c6e", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]/content: content changed '{md5}52e72ed887389862eb17c50477267c6e' to '{md5}dc0b6d8f05e1f6f2f71072fd26f06d8f'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]: The container Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle] will propagate my refresh event", > "Debug: Pacemaker::Property[rabbitmq-role-controller-0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Pacemaker::Property[rabbitmq-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Systemd]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/mode: Not managing symlink mode", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Info: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Scheduling refresh of Class[Systemd::Systemctl::Daemon_reload]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: The container Systemd::Unit_file[docker.service] will propagate my refresh event", > "Info: Systemd::Unit_file[docker.service]: Unscheduling all events on Systemd::Unit_file[docker.service]", > "Info: Class[Tripleo::Profile::Base::Pacemaker]: Unscheduling all events on Class[Tripleo::Profile::Base::Pacemaker]", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Corosync]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Resource is being skipped, unscheduling all 
events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/ensure: created", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/ensure: defined content as '{md5}050dd67b736b9b417ae97c822e4867ca'", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, 
concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Resource is being skipped, unscheduling all events", > "Info: Class[Pacemaker::Corosync]: Unscheduling all events on Class[Pacemaker::Corosync]", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: 
Class[Systemd::Systemctl::Daemon_reload]: Resource is being skipped, unscheduling all events", > "Info: Class[Systemd::Systemctl::Daemon_reload]: Unscheduling all events on Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1rbtoo4 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1rbtoo4 property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Debug: property exists: property show | grep stonith-enabled | grep false > /dev/null 2>&1 -> ", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-ficfkq returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-ficfkq property show | grep rabbitmq-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | grep rabbitmq-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1bdc2we returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1bdc2we property set --node controller-0 rabbitmq-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1bdc2we diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1bdc2we.orig returned 0 -> CIB updated", > "Debug: property create: property set --node 
controller-0 rabbitmq-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Property[rabbitmq-role-controller-0]/Pcmk_property[property-controller-0-rabbitmq-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Property[rabbitmq-role-controller-0]/Pcmk_property[property-controller-0-rabbitmq-role]: The container Pacemaker::Property[rabbitmq-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[rabbitmq-role-controller-0]: Unscheduling all events on Pacemaker::Property[rabbitmq-role-controller-0]", > "Debug: Pacemaker::Resource::Bundle[rabbitmq-bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Pacemaker::Resource::Bundle[rabbitmq-bundle]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1s90q5d returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1s90q5d constraint list | grep location-rabbitmq-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-sm01sj returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-sm01sj resource show rabbitmq-bundle > /dev/null 2>&1", > "Debug: Exists: bundle rabbitmq-bundle exists 1 location exists 1 deep_compare: false", > "Debug: Create: resource exists 1 location exists 1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-20fuxz returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-20fuxz resource bundle create rabbitmq-bundle container docker 
image=192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest replicas=1 options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=rabbitmq-cfg-files source-dir=/var/lib/kolla/config_files/rabbitmq.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=rabbitmq-cfg-data source-dir=/var/lib/config-data/puppet-generated/rabbitmq/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=rabbitmq-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=rabbitmq-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=rabbitmq-lib source-dir=/var/lib/rabbitmq target-dir=/var/lib/rabbitmq options=rw storage-map id=rabbitmq-pki-extracted source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted options=ro storage-map id=rabbitmq-pki-ca-bundle-crt source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro storage-map id=rabbitmq-pki-ca-bundle-trust-crt source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro storage-map id=rabbitmq-pki-cert source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem options=ro storage-map id=rabbitmq-log source-dir=/var/log/containers/rabbitmq target-dir=/var/log/rabbitmq options=rw storage-map id=rabbitmq-dev-log source-dir=/dev/log target-dir=/dev/log options=rw network control-port=3122 --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-20fuxz diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-20fuxz.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location rabbitmq-bundle rule resource-discovery=exclusive score=0 rabbitmq-role eq true", > "Debug: location_rule_create: constraint location rabbitmq-bundle rule 
resource-discovery=exclusive score=0 rabbitmq-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-8pimrw returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-8pimrw constraint location rabbitmq-bundle rule resource-discovery=exclusive score=0 rabbitmq-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-8pimrw diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-8pimrw.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-s9qeo4 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-s9qeo4 resource enable rabbitmq-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-s9qeo4 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-s9qeo4.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Bundle[rabbitmq-bundle]/Pcmk_bundle[rabbitmq-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Bundle[rabbitmq-bundle]/Pcmk_bundle[rabbitmq-bundle]: The container Pacemaker::Resource::Bundle[rabbitmq-bundle] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[rabbitmq-bundle]: Unscheduling all events on Pacemaker::Resource::Bundle[rabbitmq-bundle]", > "Debug: Pacemaker::Resource::Ocf[rabbitmq]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Pacemaker::Resource::Ocf[rabbitmq]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib 
/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-y6qpog returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-y6qpog constraint list | grep location-rabbitmq-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-cfp40h returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-cfp40h resource show rabbitmq > /dev/null 2>&1", > "Debug: Exists: resource rabbitmq exists 1 location exists 0 resource deep_compare: false", > "Debug: Create: resource exists 1 location exists 0", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-xvqwu3 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-xvqwu3 resource create rabbitmq ocf:heartbeat:rabbitmq-cluster set_policy='ha-all ^(?!amq\\.).* {\"ha-mode\":\"all\"}' meta notify=true container-attribute-target=host op start timeout=200s stop timeout=200s bundle rabbitmq-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-xvqwu3 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-xvqwu3.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/Pcmk_resource[rabbitmq]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/Pcmk_resource[rabbitmq]: The container Pacemaker::Resource::Ocf[rabbitmq] will propagate my refresh event", > "Info: Pacemaker::Resource::Ocf[rabbitmq]: Unscheduling all events on Pacemaker::Resource::Ocf[rabbitmq]", > "Debug: Exec[rabbitmq-ready](provider=posix): Executing check 'rabbitmqctl status | grep -F \"{rabbit,\"'", > "Debug: Executing: 'rabbitmqctl status | grep -F \"{rabbit,\"'", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: Error: Failed to initialize erlang distribution: {{shutdown,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: {failed_to_start_child,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: net_kernel,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: {'EXIT',nodistribution}}},", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: {child,undefined,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: net_sup_dynamic,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: {erl_distribution,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: start_link,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: [['rabbitmq-cli-19',", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: shortnames]]},", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: permanent,1000,supervisor,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: [erl_distribution]}}.", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: Exec try 1/180", > "Debug: Exec[rabbitmq-ready](provider=posix): Executing 'rabbitmqctl status | grep -F \"{rabbit,\"'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: Sleeping for 10 seconds between tries", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: Exec try 2/180", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: Exec try 3/180", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: executed successfully", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]: The container Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle] will propagate my refresh event", > "Info: Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle]: Unscheduling all events on Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle]", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[puppet]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[hourly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[daily]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[weekly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[monthly]: Not tagged 
with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[monthly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[never]: Resource is being skipped, unscheduling all events", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Filebucket[puppet]: Resource is being skipped, unscheduling all events", > "Debug: Finishing transaction 34378260", > "Debug: Storing state", > "Info: Creating state file /var/lib/puppet/state/state.yaml", > "Debug: Stored state in 0.01 seconds", > "Notice: Applied catalog in 61.06 seconds", > "Changes:", > " Total: 21", > "Events:", > " Success: 21", > "Resources:", > " Changed: 18", > " Out of sync: 18", > " Skipped: 25", > " Total: 45", > "Time:", > " File line: 0.00", > " File: 0.05", > " Config retrieval: 1.59", > " Last run: 1529921463", > " Pcmk bundle: 16.93", > " Exec: 25.94", > " Total: 62.44", > " Pcmk property: 8.30", > " Pcmk resource: 9.63", > "Version:", > " Config: 1529921400", > " Puppet: 4.8.2", > "Debug: Applying settings catalog for sections main, reporting, metrics", > "Debug: Using settings: adding file resource 'confdir': 'File[/etc/puppet]{:path=>\"/etc/puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'vardir': 
'File[/var/lib/puppet]{:path=>\"/var/lib/puppet\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'logdir': 'File[/var/log/puppet]{:path=>\"/var/log/puppet\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'statedir': 'File[/var/lib/puppet/state]{:path=>\"/var/lib/puppet/state\", :mode=>\"1755\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'rundir': 'File[/var/run/puppet]{:path=>\"/var/run/puppet\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'libdir': 'File[/var/lib/puppet/lib]{:path=>\"/var/lib/puppet/lib\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'hiera_config': 'File[/etc/puppet/hiera.yaml]{:path=>\"/etc/puppet/hiera.yaml\", :ensure=>:file, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'preview_outputdir': 'File[/var/lib/puppet/preview]{:path=>\"/var/lib/puppet/preview\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'certdir': 'File[/etc/puppet/ssl/certs]{:path=>\"/etc/puppet/ssl/certs\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'ssldir': 'File[/etc/puppet/ssl]{:path=>\"/etc/puppet/ssl\", :mode=>\"771\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, 
:backup=>false}'", > "Debug: Using settings: adding file resource 'publickeydir': 'File[/etc/puppet/ssl/public_keys]{:path=>\"/etc/puppet/ssl/public_keys\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'requestdir': 'File[/etc/puppet/ssl/certificate_requests]{:path=>\"/etc/puppet/ssl/certificate_requests\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'privatekeydir': 'File[/etc/puppet/ssl/private_keys]{:path=>\"/etc/puppet/ssl/private_keys\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'privatedir': 'File[/etc/puppet/ssl/private]{:path=>\"/etc/puppet/ssl/private\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'pluginfactdest': 'File[/var/lib/puppet/facts.d]{:path=>\"/var/lib/puppet/facts.d\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: /File[/var/lib/puppet/state]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/var/lib/puppet/lib]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/etc/puppet/hiera.yaml]: Adding autorequire relationship with File[/etc/puppet]", > "Debug: /File[/var/lib/puppet/preview]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/etc/puppet/ssl/certs]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl]: Adding autorequire relationship with File[/etc/puppet]", > "Debug: /File[/etc/puppet/ssl/public_keys]: Adding autorequire relationship with 
File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/certificate_requests]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/private_keys]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/private]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/var/lib/puppet/facts.d]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: Finishing transaction 33283480", > "Debug: Received report to process from controller-0.localdomain", > "Debug: Processing report from controller-0.localdomain with processor Puppet::Reports::Store", > "stderr: + STEP=2", > "+ TAGS=file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", > "+ CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle'", > "+ EXTRA_ARGS=--debug", > "+ '[' -d /tmp/puppet-etc ']'", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ echo '{\"step\": 2}'", > "+ export FACTER_uuid=docker", > "+ FACTER_uuid=docker", > "+ set +e", > "+ puppet apply --debug --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready -e 'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle'", > "Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='rabbitmq_nodename', 
resolution='<anonymous>': undefined method `[]' for nil:NilClass", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: ModuleLoader: module 'rabbitmq' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rc=2", > "+ set -e", > "+ set +ux", > "Debug: Facter: value for erl_ssl_path is still nil", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/mysql_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::database::mysql_bundle from tripleo/profile/pacemaker/database/mysql_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::mysql_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::control_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::bind_address in JSON backend", > "Debug: hiera(): Looking up fqdn_internal_api in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::ca_file in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::cipher_list in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::gcomm_cipher in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::certificate_specs in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::enable_internal_tls in JSON backend", > "Debug: hiera(): 
Looking up tripleo::profile::pacemaker::database::mysql_bundle::gmcast_listen_addr in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::innodb_flush_log_at_trx_commit in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::sst_tls_cipher in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::sst_tls_options in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::ipv6 in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::step in JSON backend", > "Debug: hiera(): Looking up mysql_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::certificate_specs in JSON backend", > "Debug: hiera(): Looking up mysql_bind_host in JSON backend", > "Debug: hiera(): Looking up innodb_flush_log_at_trx_commit in JSON backend", > "Debug: hiera(): Looking up mysql_ipv6 in JSON backend", > "Debug: hiera(): Looking up mysql_short_node_names in JSON backend", > "Debug: hiera(): Looking up mysql_node_names in JSON backend", > "Debug: hiera(): Looking up mysql_max_connections in JSON backend", > "Debug: hiera(): Looking up mysql::server::root_password in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::database::mysql from tripleo/profile/base/database/mysql into production", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::bind_address in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::base::database::mysql::generate_dropin_file_limit in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::innodb_buffer_pool_size in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::mysql_max_connections in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::step in JSON backend", > "Debug: hiera(): Looking up innodb_buffer_pool_size in JSON backend", > "Debug: hiera(): Looking up enable_galera in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server.pp' in environment production", > "Debug: Automatically imported mysql::server from mysql/server into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/params.pp' in environment production", > "Debug: Automatically imported mysql::params from mysql/params into production", > "Debug: hiera(): Looking up mysql::server::includedir in JSON backend", > "Debug: hiera(): Looking up mysql::server::install_options in JSON backend", > "Debug: hiera(): Looking up mysql::server::install_secret_file in JSON backend", > "Debug: hiera(): Looking up mysql::server::manage_config_file in JSON backend", > "Debug: hiera(): Looking up mysql::server::package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::server::package_manage in JSON backend", > "Debug: hiera(): Looking up mysql::server::package_name in JSON backend", > "Debug: hiera(): Looking up mysql::server::purge_conf_dir in JSON backend", > "Debug: hiera(): Looking up mysql::server::restart in JSON backend", > "Debug: hiera(): Looking up mysql::server::root_group in JSON backend", > "Debug: hiera(): Looking up mysql::server::mysql_group in JSON backend", > "Debug: hiera(): Looking up mysql::server::service_name in JSON backend", > "Debug: hiera(): Looking up mysql::server::service_provider in JSON backend", > "Debug: hiera(): Looking up mysql::server::users in JSON backend", > "Debug: hiera(): Looking up 
mysql::server::grants in JSON backend", > "Debug: hiera(): Looking up mysql::server::databases in JSON backend", > "Debug: hiera(): Looking up mysql::server::enabled in JSON backend", > "Debug: hiera(): Looking up mysql::server::manage_service in JSON backend", > "Debug: hiera(): Looking up mysql::server::old_root_password in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/db.pp' in environment production", > "Debug: Automatically imported mysql::db from mysql/db into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/config.pp' in environment production", > "Debug: Automatically imported mysql::server::config from mysql/server/config into production", > "Debug: Scope(Class[Mysql::Server::Config]): Retrieving template mysql/my.cnf.erb", > "Debug: template[/etc/puppet/modules/mysql/templates/my.cnf.erb]: Bound template variables for /etc/puppet/modules/mysql/templates/my.cnf.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/mysql/templates/my.cnf.erb]: Interpolated template /etc/puppet/modules/mysql/templates/my.cnf.erb in 0.00 seconds", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/install.pp' in environment production", > "Debug: Automatically imported mysql::server::install from mysql/server/install into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/binarylog.pp' in environment production", > "Debug: Automatically imported mysql::server::binarylog from mysql/server/binarylog into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/installdb.pp' in environment production", > "Debug: Automatically imported mysql::server::installdb from mysql/server/installdb into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/service.pp' in environment production", > "Debug: Automatically imported mysql::server::service from mysql/server/service into production", > "Debug: importing 
'/etc/puppet/modules/mysql/manifests/server/root_password.pp' in environment production", > "Debug: Automatically imported mysql::server::root_password from mysql/server/root_password into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/providers.pp' in environment production", > "Debug: Automatically imported mysql::server::providers from mysql/server/providers into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/account_security.pp' in environment production", > "Debug: Automatically imported mysql::server::account_security from mysql/server/account_security into production", > "Debug: hiera(): Looking up aodh_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/aodh/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/aodh/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported aodh::db::mysql from aodh/db/mysql into production", > "Debug: hiera(): Looking up aodh::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::allowed_hosts in JSON backend", > "Debug: importing '/etc/puppet/modules/aodh/manifests/deps.pp' in environment production", > "Debug: Automatically imported aodh::deps from aodh/deps into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/db.pp' in environment production", > "Debug: Automatically imported oslo::db from oslo/db into production", > "Debug: importing 
'/etc/puppet/modules/openstacklib/manifests/policy/base.pp' in environment production", > "Debug: Automatically imported openstacklib::policy::base from openstacklib/policy/base into production", > "Debug: importing '/etc/puppet/modules/openstacklib/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported openstacklib::db::mysql from openstacklib/db/mysql into production", > "Debug: hiera(): Looking up ceilometer_collector_enabled in JSON backend", > "Debug: hiera(): Looking up cinder_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/cinder/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported cinder::db::mysql from cinder/db/mysql into production", > "Debug: hiera(): Looking up cinder::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/deps.pp' in environment production", > "Debug: Automatically imported cinder::deps from cinder/deps into production", > "Debug: hiera(): Looking up barbican_api_enabled in JSON backend", > "Debug: hiera(): Looking up congress_enabled in JSON backend", > "Debug: hiera(): Looking up designate_api_enabled in JSON backend", > "Debug: hiera(): Looking up glance_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/glance/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/glance/manifests/db/mysql.pp' in environment production", > 
"Debug: Automatically imported glance::db::mysql from glance/db/mysql into production", > "Debug: hiera(): Looking up glance::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/glance/manifests/deps.pp' in environment production", > "Debug: Automatically imported glance::deps from glance/deps into production", > "Debug: hiera(): Looking up gnocchi_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/gnocchi/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/gnocchi/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported gnocchi::db::mysql from gnocchi/db/mysql into production", > "Debug: hiera(): Looking up gnocchi::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::allowed_hosts in JSON backend", > "Debug: importing '/etc/puppet/modules/gnocchi/manifests/deps.pp' in environment production", > "Debug: Automatically imported gnocchi::deps from gnocchi/deps into production", > "Debug: hiera(): Looking up heat_engine_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/heat/manifests/init.pp' in 
environment production", > "Debug: importing '/etc/puppet/modules/heat/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported heat::db::mysql from heat/db/mysql into production", > "Debug: hiera(): Looking up heat::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/heat/manifests/deps.pp' in environment production", > "Debug: Automatically imported heat::deps from heat/deps into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/cache.pp' in environment production", > "Debug: Automatically imported oslo::cache from oslo/cache into production", > "Debug: hiera(): Looking up ironic_api_enabled in JSON backend", > "Debug: hiera(): Looking up ironic_inspector_enabled in JSON backend", > "Debug: hiera(): Looking up keystone_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/keystone/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/keystone/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported keystone::db::mysql from keystone/db/mysql into production", > "Debug: hiera(): Looking up keystone::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::collate in 
JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::allowed_hosts in JSON backend", > "Debug: importing '/etc/puppet/modules/keystone/manifests/deps.pp' in environment production", > "Debug: Automatically imported keystone::deps from keystone/deps into production", > "Debug: hiera(): Looking up manila_api_enabled in JSON backend", > "Debug: hiera(): Looking up mistral_api_enabled in JSON backend", > "Debug: hiera(): Looking up neutron_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/neutron/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/neutron/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported neutron::db::mysql from neutron/db/mysql into production", > "Debug: hiera(): Looking up neutron::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/neutron/manifests/deps.pp' in environment production", > "Debug: Automatically imported neutron::deps from neutron/deps into production", > "Debug: hiera(): Looking up nova_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/nova/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/nova/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported nova::db::mysql from nova/db/mysql into production", > "Debug: hiera(): Looking up nova::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up 
nova::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::setup_cell0 in JSON backend", > "Debug: importing '/etc/puppet/modules/nova/manifests/deps.pp' in environment production", > "Debug: Automatically imported nova::deps from nova/deps into production", > "Debug: importing '/etc/puppet/modules/nova/manifests/db/mysql_api.pp' in environment production", > "Debug: Automatically imported nova::db::mysql_api from nova/db/mysql_api into production", > "Debug: hiera(): Looking up nova::db::mysql_api::password in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::dbname in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::user in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::host in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::charset in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::collate in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up nova_placement_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/nova/manifests/db/mysql_placement.pp' in environment production", > "Debug: Automatically imported nova::db::mysql_placement from nova/db/mysql_placement into production", > "Debug: hiera(): Looking up nova::db::mysql_placement::password in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::dbname in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::user in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::host in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::charset in JSON 
backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::collate in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up octavia_api_enabled in JSON backend", > "Debug: hiera(): Looking up sahara_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/sahara/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/sahara/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported sahara::db::mysql from sahara/db/mysql into production", > "Debug: hiera(): Looking up sahara::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/sahara/manifests/deps.pp' in environment production", > "Debug: Automatically imported sahara::deps from sahara/deps into production", > "Debug: hiera(): Looking up tacker_enabled in JSON backend", > "Debug: hiera(): Looking up trove_api_enabled in JSON backend", > "Debug: hiera(): Looking up panko_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/panko/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/panko/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported panko::db::mysql from panko/db/mysql into production", > "Debug: hiera(): Looking up panko::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::user in JSON backend", > 
"Debug: hiera(): Looking up panko::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::allowed_hosts in JSON backend", > "Debug: importing '/etc/puppet/modules/panko/manifests/deps.pp' in environment production", > "Debug: Automatically imported panko::deps from panko/deps into production", > "Debug: hiera(): Looking up ec2_api_enabled in JSON backend", > "Debug: hiera(): Looking up zaqar_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/client.pp' in environment production", > "Debug: Automatically imported mysql::client from mysql/client into production", > "Debug: hiera(): Looking up mysql::client::bindings_enable in JSON backend", > "Debug: hiera(): Looking up mysql::client::install_options in JSON backend", > "Debug: hiera(): Looking up mysql::client::package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::client::package_manage in JSON backend", > "Debug: hiera(): Looking up mysql::client::package_name in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/client/install.pp' in environment production", > "Debug: Automatically imported mysql::client::install from mysql/client/install into production", > "Debug: importing '/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp' in environment production", > "Debug: Automatically imported openstacklib::db::mysql::host_access from openstacklib/db/mysql/host_access into production", > "Debug: template[inline]: Interpolated template inline template in 0.04 seconds", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[galera] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-galera-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to 
Pcmk_bundle[galera-bundle] with 'before'", > "Debug: Adding relationship from Anchor[mysql::server::start] to Class[Mysql::Server::Install] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Install] to Class[Mysql::Server::Config] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Config] to Class[Mysql::Server::Binarylog] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Binarylog] to Class[Mysql::Server::Installdb] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Installdb] to Class[Mysql::Server::Service] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Service] to Class[Mysql::Server::Root_password] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Root_password] to Class[Mysql::Server::Providers] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Providers] to Anchor[mysql::server::end] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[test] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[aodh] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[cinder] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[glance] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[gnocchi] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[heat] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[keystone] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[ovs_neutron] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_cell0] with 'before'", > "Debug: Adding relationship from 
Class[Mysql::Server] to Mysql_database[nova_api] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_placement] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[sahara] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[panko] with 'before'", > "Debug: Adding relationship from Anchor[aodh::install::end] to Anchor[aodh::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[aodh::config::end] to Anchor[aodh::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[aodh::db::begin] to Anchor[aodh::db::end] with 'before'", > "Debug: Adding relationship from Anchor[aodh::db::end] to Anchor[aodh::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[aodh::dbsync::begin] to Anchor[aodh::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[aodh::dbsync::end] to Anchor[aodh::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[aodh::install::end] to Anchor[aodh::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[aodh::config::end] to Anchor[aodh::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[aodh::db::begin] to Class[Aodh::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Aodh::Db::Mysql] to Anchor[aodh::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::install::end] to Anchor[cinder::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::end] to Anchor[cinder::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[cinder::db::begin] to Anchor[cinder::db::end] with 'before'", > "Debug: Adding relationship from Anchor[cinder::db::end] to Anchor[cinder::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::dbsync::begin] to Anchor[cinder::dbsync::end] with 'before'", > "Debug: Adding relationship from 
Anchor[cinder::dbsync::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::install::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::config::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::db::begin] to Class[Cinder::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Cinder::Db::Mysql] to Anchor[cinder::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[glance::install::end] to Anchor[glance::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[glance::config::end] to Anchor[glance::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[glance::db::begin] to Anchor[glance::db::end] with 'before'", > "Debug: Adding relationship from Anchor[glance::db::end] to Anchor[glance::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[glance::dbsync::begin] to Anchor[glance::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[glance::dbsync::end] to Anchor[glance::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[glance::install::end] to Anchor[glance::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[glance::config::end] to Anchor[glance::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[glance::db::begin] to Class[Glance::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Glance::Db::Mysql] to Anchor[glance::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::install::end] to Anchor[gnocchi::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[gnocchi::config::end] to Anchor[gnocchi::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[gnocchi::db::begin] to Anchor[gnocchi::db::end] with 'before'", > "Debug: Adding relationship from Anchor[gnocchi::db::end] to 
Anchor[gnocchi::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::dbsync::begin] to Anchor[gnocchi::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[gnocchi::dbsync::end] to Anchor[gnocchi::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::install::end] to Anchor[gnocchi::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::config::end] to Anchor[gnocchi::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::db::begin] to Class[Gnocchi::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Gnocchi::Db::Mysql] to Anchor[gnocchi::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[heat::install::end] to Anchor[heat::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[heat::config::end] to Anchor[heat::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[heat::db::begin] to Anchor[heat::db::end] with 'before'", > "Debug: Adding relationship from Anchor[heat::db::end] to Anchor[heat::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[heat::dbsync::begin] to Anchor[heat::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[heat::dbsync::end] to Anchor[heat::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[heat::install::end] to Anchor[heat::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[heat::config::end] to Anchor[heat::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[heat::db::begin] to Class[Heat::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Heat::Db::Mysql] to Anchor[heat::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::install::end] to Anchor[keystone::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[keystone::config::end] to Anchor[keystone::db::begin] with 
'before'", > "Debug: Adding relationship from Anchor[keystone::db::begin] to Anchor[keystone::db::end] with 'before'", > "Debug: Adding relationship from Anchor[keystone::db::end] to Anchor[keystone::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::dbsync::begin] to Anchor[keystone::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[keystone::dbsync::end] to Anchor[keystone::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::install::end] to Anchor[keystone::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::config::end] to Anchor[keystone::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::db::begin] to Class[Keystone::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Keystone::Db::Mysql] to Anchor[keystone::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::install::end] to Anchor[neutron::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[neutron::config::end] to Anchor[neutron::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[neutron::db::begin] to Anchor[neutron::db::end] with 'before'", > "Debug: Adding relationship from Anchor[neutron::db::end] to Anchor[neutron::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::dbsync::begin] to Anchor[neutron::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[neutron::dbsync::end] to Anchor[neutron::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::install::end] to Anchor[neutron::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::config::end] to Anchor[neutron::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::db::begin] to Class[Neutron::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Neutron::Db::Mysql] to 
Anchor[neutron::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[nova::install::end] to Anchor[nova::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[nova::config::end] to Anchor[nova::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[nova::db::begin] to Anchor[nova::db::end] with 'before'", > "Debug: Adding relationship from Anchor[nova::db::end] to Anchor[nova::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[nova::install::end] to Anchor[nova::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[nova::config::end] to Anchor[nova::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[nova::dbsync_api::begin] to Anchor[nova::dbsync_api::end] with 'before'", > "Debug: Adding relationship from Anchor[nova::dbsync::begin] to Anchor[nova::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[nova::cell_v2::begin] to Anchor[nova::cell_v2::end] with 'notify'", > "Debug: Adding relationship from Anchor[nova::db_online_data_migrations::begin] to Anchor[nova::db_online_data_migrations::end] with 'before'", > "Debug: Adding relationship from Anchor[nova::db::begin] to Class[Nova::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Nova::Db::Mysql] to Anchor[nova::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[nova::db::begin] to Class[Nova::Db::Mysql_api] with 'notify'", > "Debug: Adding relationship from Class[Nova::Db::Mysql_api] to Anchor[nova::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[nova::db::begin] to Class[Nova::Db::Mysql_placement] with 'notify'", > "Debug: Adding relationship from Class[Nova::Db::Mysql_placement] to Anchor[nova::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::install::end] to Anchor[sahara::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[sahara::config::end] to Anchor[sahara::db::begin] with 
'before'", > "Debug: Adding relationship from Anchor[sahara::db::begin] to Anchor[sahara::db::end] with 'before'", > "Debug: Adding relationship from Anchor[sahara::db::end] to Anchor[sahara::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::dbsync::begin] to Anchor[sahara::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[sahara::dbsync::end] to Anchor[sahara::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::install::end] to Anchor[sahara::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::config::end] to Anchor[sahara::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::db::begin] to Class[Sahara::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Sahara::Db::Mysql] to Anchor[sahara::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[panko::install::end] to Anchor[panko::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[panko::config::end] to Anchor[panko::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[panko::db::begin] to Anchor[panko::db::end] with 'before'", > "Debug: Adding relationship from Anchor[panko::db::end] to Anchor[panko::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[panko::dbsync::begin] to Anchor[panko::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[panko::dbsync::end] to Anchor[panko::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[panko::install::end] to Anchor[panko::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[panko::config::end] to Anchor[panko::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[panko::db::begin] to Class[Panko::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Panko::Db::Mysql] to Anchor[panko::db::end] with 'notify'", > "Debug: Adding relationship from 
File[/root/.my.cnf] to Mysql_database[test] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[aodh] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[cinder] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[glance] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[gnocchi] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[heat] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[keystone] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[ovs_neutron] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[nova] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[nova_cell0] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[nova_api] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[nova_placement] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[sahara] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[panko] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@127.0.0.1] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@::1] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@localhost] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@localhost.localdomain] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@localhost.localdomain] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to 
Mysql_user[root@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@controller-0] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@controller-0] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[aodh@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[aodh@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[aodh@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[cinder@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[cinder@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[cinder@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[glance@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[glance@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[glance@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[gnocchi@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[gnocchi@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[gnocchi@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[heat@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[heat@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[heat@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[keystone@%] with 'before'", > "Debug: 
Adding relationship from File[/root/.my.cnf] to Mysql_user[keystone@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[keystone@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[neutron@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[neutron@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[neutron@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_api@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_api@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_api@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_placement@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_placement@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_placement@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[sahara@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[sahara@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[sahara@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[panko@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[panko@172.17.1.12] with 'before'", > "Debug: Adding relationship from 
File[/root/.my.cnf] to Mysql_user[panko@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[aodh@%/aodh.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[aodh@172.17.1.12/aodh.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[aodh@172.17.1.15/aodh.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[cinder@%/cinder.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[cinder@172.17.1.12/cinder.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[cinder@172.17.1.15/cinder.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[glance@%/glance.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[glance@172.17.1.12/glance.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[glance@172.17.1.15/glance.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[gnocchi@%/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[gnocchi@172.17.1.12/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[gnocchi@172.17.1.15/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[heat@%/heat.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[heat@172.17.1.12/heat.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[heat@172.17.1.15/heat.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[keystone@%/keystone.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[keystone@172.17.1.12/keystone.*] with 'before'", > "Debug: Adding relationship from 
File[/root/.my.cnf] to Mysql_grant[keystone@172.17.1.15/keystone.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[neutron@%/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[neutron@172.17.1.12/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[neutron@172.17.1.15/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@%/nova.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@172.17.1.12/nova.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@172.17.1.15/nova.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@%/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@172.17.1.12/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@172.17.1.15/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_api@%/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_api@172.17.1.12/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_api@172.17.1.15/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_placement@%/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_placement@172.17.1.12/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_placement@172.17.1.15/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[sahara@%/sahara.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to 
Mysql_grant[sahara@172.17.1.12/sahara.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[sahara@172.17.1.15/sahara.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[panko@%/panko.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[panko@172.17.1.12/panko.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[panko@172.17.1.15/panko.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[test] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[aodh] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[cinder] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[glance] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[gnocchi] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[heat] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[keystone] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[ovs_neutron] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[nova] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[nova_cell0] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[nova_api] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[nova_placement] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[sahara] with 'before'", > "Debug: Adding relationship from 
File[/etc/sysconfig/clustercheck] to Mysql_database[panko] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@127.0.0.1] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@::1] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@localhost] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@localhost.localdomain] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@localhost.localdomain] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@controller-0] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@controller-0] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[aodh@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[aodh@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[aodh@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[cinder@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[cinder@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[cinder@172.17.1.15] with 'before'", > "Debug: Adding relationship from 
File[/etc/sysconfig/clustercheck] to Mysql_user[glance@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[glance@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[glance@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[gnocchi@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[gnocchi@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[gnocchi@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[heat@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[heat@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[heat@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[keystone@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[keystone@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[keystone@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[neutron@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[neutron@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[neutron@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] 
to Mysql_user[nova@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_api@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_api@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_api@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_placement@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_placement@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_placement@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[sahara@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[sahara@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[sahara@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[panko@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[panko@172.17.1.12] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[panko@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[aodh@%/aodh.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[aodh@172.17.1.12/aodh.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[aodh@172.17.1.15/aodh.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[cinder@%/cinder.*] with 'before'", > "Debug: Adding relationship from 
File[/etc/sysconfig/clustercheck] to Mysql_grant[cinder@172.17.1.12/cinder.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[cinder@172.17.1.15/cinder.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[glance@%/glance.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[glance@172.17.1.12/glance.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[glance@172.17.1.15/glance.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[gnocchi@%/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[gnocchi@172.17.1.12/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[gnocchi@172.17.1.15/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[heat@%/heat.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[heat@172.17.1.12/heat.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[heat@172.17.1.15/heat.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[keystone@%/keystone.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[keystone@172.17.1.12/keystone.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[keystone@172.17.1.15/keystone.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[neutron@%/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[neutron@172.17.1.12/ovs_neutron.*] with 'before'", > 
"Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[neutron@172.17.1.15/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@%/nova.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@172.17.1.12/nova.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@172.17.1.15/nova.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@%/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@172.17.1.12/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@172.17.1.15/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_api@%/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_api@172.17.1.12/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_api@172.17.1.15/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_placement@%/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_placement@172.17.1.12/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_placement@172.17.1.15/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[sahara@%/sahara.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[sahara@172.17.1.12/sahara.*] with 'before'", > "Debug: Adding relationship from 
File[/etc/sysconfig/clustercheck] to Mysql_grant[sahara@172.17.1.15/sahara.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[panko@%/panko.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[panko@172.17.1.12/panko.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[panko@172.17.1.15/panko.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[test] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[aodh] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[cinder] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[glance] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[gnocchi] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[heat] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[keystone] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[ovs_neutron] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[nova] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[nova_cell0] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[nova_api] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[nova_placement] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[sahara] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[panko] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@127.0.0.1] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@::1] 
with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@localhost] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@localhost.localdomain] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@localhost.localdomain] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@controller-0] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@controller-0] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[aodh@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[aodh@172.17.1.12] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[aodh@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[cinder@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[cinder@172.17.1.12] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[cinder@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[glance@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[glance@172.17.1.12] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[glance@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[gnocchi@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[gnocchi@172.17.1.12] with 'before'", > "Debug: Adding relationship from 
Exec[galera-ready] to Mysql_user[gnocchi@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[heat@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[heat@172.17.1.12] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[heat@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[keystone@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[keystone@172.17.1.12] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[keystone@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[neutron@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[neutron@172.17.1.12] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[neutron@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova@172.17.1.12] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_api@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_api@172.17.1.12] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_api@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_placement@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_placement@172.17.1.12] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_placement@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[sahara@%] with 
'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[sahara@172.17.1.12] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[sahara@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[panko@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[panko@172.17.1.12] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[panko@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[aodh@%/aodh.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[aodh@172.17.1.12/aodh.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[aodh@172.17.1.15/aodh.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[cinder@%/cinder.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[cinder@172.17.1.12/cinder.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[cinder@172.17.1.15/cinder.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[glance@%/glance.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[glance@172.17.1.12/glance.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[glance@172.17.1.15/glance.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[gnocchi@%/gnocchi.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[gnocchi@172.17.1.12/gnocchi.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[gnocchi@172.17.1.15/gnocchi.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[heat@%/heat.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to 
Mysql_grant[heat@172.17.1.12/heat.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[heat@172.17.1.15/heat.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[keystone@%/keystone.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[keystone@172.17.1.12/keystone.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[keystone@172.17.1.15/keystone.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[neutron@%/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[neutron@172.17.1.12/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[neutron@172.17.1.15/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@%/nova.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@172.17.1.12/nova.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@172.17.1.15/nova.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@%/nova_cell0.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@172.17.1.12/nova_cell0.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@172.17.1.15/nova_cell0.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_api@%/nova_api.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_api@172.17.1.12/nova_api.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_api@172.17.1.15/nova_api.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_placement@%/nova_placement.*] with 'before'", > "Debug: Adding 
relationship from Exec[galera-ready] to Mysql_grant[nova_placement@172.17.1.12/nova_placement.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_placement@172.17.1.15/nova_placement.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[sahara@%/sahara.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[sahara@172.17.1.12/sahara.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[sahara@172.17.1.15/sahara.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[panko@%/panko.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[panko@172.17.1.12/panko.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[panko@172.17.1.15/panko.*] with 'before'", > "Debug: Adding relationship from Anchor[mysql::client::start] to Class[Mysql::Client::Install] with 'before'", > "Debug: Adding relationship from Class[Mysql::Client::Install] to Anchor[mysql::client::end] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[aodh] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[aodh] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[cinder] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[cinder] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[glance] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[glance] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[gnocchi] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[gnocchi] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[heat] with 
'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[heat] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[keystone] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[keystone] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[ovs_neutron] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[ovs_neutron] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[nova] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_cell0] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[nova_cell0] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_api] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[nova_api] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_placement] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[nova_placement] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[sahara] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[sahara] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[panko] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[panko] with 'notify'", > "Debug: Adding relationship from Mysql_database[aodh] to Mysql_user[aodh@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[aodh@%] to Mysql_grant[aodh@%/aodh.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[aodh] to Mysql_user[aodh@172.17.1.12] with 
'notify'", > "Debug: Adding relationship from Mysql_user[aodh@172.17.1.12] to Mysql_grant[aodh@172.17.1.12/aodh.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[aodh] to Mysql_user[aodh@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[aodh@172.17.1.15] to Mysql_grant[aodh@172.17.1.15/aodh.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[cinder] to Mysql_user[cinder@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[cinder@%] to Mysql_grant[cinder@%/cinder.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[cinder] to Mysql_user[cinder@172.17.1.12] with 'notify'", > "Debug: Adding relationship from Mysql_user[cinder@172.17.1.12] to Mysql_grant[cinder@172.17.1.12/cinder.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[cinder] to Mysql_user[cinder@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[cinder@172.17.1.15] to Mysql_grant[cinder@172.17.1.15/cinder.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[glance] to Mysql_user[glance@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[glance@%] to Mysql_grant[glance@%/glance.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[glance] to Mysql_user[glance@172.17.1.12] with 'notify'", > "Debug: Adding relationship from Mysql_user[glance@172.17.1.12] to Mysql_grant[glance@172.17.1.12/glance.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[glance] to Mysql_user[glance@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[glance@172.17.1.15] to Mysql_grant[glance@172.17.1.15/glance.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[gnocchi] to Mysql_user[gnocchi@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[gnocchi@%] to Mysql_grant[gnocchi@%/gnocchi.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[gnocchi] to 
Mysql_user[gnocchi@172.17.1.12] with 'notify'", > "Debug: Adding relationship from Mysql_user[gnocchi@172.17.1.12] to Mysql_grant[gnocchi@172.17.1.12/gnocchi.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[gnocchi] to Mysql_user[gnocchi@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[gnocchi@172.17.1.15] to Mysql_grant[gnocchi@172.17.1.15/gnocchi.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[heat] to Mysql_user[heat@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[heat@%] to Mysql_grant[heat@%/heat.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[heat] to Mysql_user[heat@172.17.1.12] with 'notify'", > "Debug: Adding relationship from Mysql_user[heat@172.17.1.12] to Mysql_grant[heat@172.17.1.12/heat.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[heat] to Mysql_user[heat@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[heat@172.17.1.15] to Mysql_grant[heat@172.17.1.15/heat.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[keystone] to Mysql_user[keystone@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[keystone@%] to Mysql_grant[keystone@%/keystone.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[keystone] to Mysql_user[keystone@172.17.1.12] with 'notify'", > "Debug: Adding relationship from Mysql_user[keystone@172.17.1.12] to Mysql_grant[keystone@172.17.1.12/keystone.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[keystone] to Mysql_user[keystone@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[keystone@172.17.1.15] to Mysql_grant[keystone@172.17.1.15/keystone.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[ovs_neutron] to Mysql_user[neutron@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[neutron@%] to Mysql_grant[neutron@%/ovs_neutron.*] with 'notify'", > "Debug: Adding 
relationship from Mysql_database[ovs_neutron] to Mysql_user[neutron@172.17.1.12] with 'notify'", > "Debug: Adding relationship from Mysql_user[neutron@172.17.1.12] to Mysql_grant[neutron@172.17.1.12/ovs_neutron.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[ovs_neutron] to Mysql_user[neutron@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[neutron@172.17.1.15] to Mysql_grant[neutron@172.17.1.15/ovs_neutron.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova] to Mysql_user[nova@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@%] to Mysql_grant[nova@%/nova.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova] to Mysql_user[nova@172.17.1.12] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@172.17.1.12] to Mysql_grant[nova@172.17.1.12/nova.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova] to Mysql_user[nova@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@172.17.1.15] to Mysql_grant[nova@172.17.1.15/nova.*] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@%] to Mysql_grant[nova@%/nova_cell0.*] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@172.17.1.12] to Mysql_grant[nova@172.17.1.12/nova_cell0.*] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@172.17.1.15] to Mysql_grant[nova@172.17.1.15/nova_cell0.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_api] to Mysql_user[nova_api@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_api@%] to Mysql_grant[nova_api@%/nova_api.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_api] to Mysql_user[nova_api@172.17.1.12] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_api@172.17.1.12] to Mysql_grant[nova_api@172.17.1.12/nova_api.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_api] to 
Mysql_user[nova_api@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_api@172.17.1.15] to Mysql_grant[nova_api@172.17.1.15/nova_api.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_placement] to Mysql_user[nova_placement@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_placement@%] to Mysql_grant[nova_placement@%/nova_placement.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_placement] to Mysql_user[nova_placement@172.17.1.12] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_placement@172.17.1.12] to Mysql_grant[nova_placement@172.17.1.12/nova_placement.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_placement] to Mysql_user[nova_placement@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_placement@172.17.1.15] to Mysql_grant[nova_placement@172.17.1.15/nova_placement.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[sahara] to Mysql_user[sahara@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[sahara@%] to Mysql_grant[sahara@%/sahara.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[sahara] to Mysql_user[sahara@172.17.1.12] with 'notify'", > "Debug: Adding relationship from Mysql_user[sahara@172.17.1.12] to Mysql_grant[sahara@172.17.1.12/sahara.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[sahara] to Mysql_user[sahara@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[sahara@172.17.1.15] to Mysql_grant[sahara@172.17.1.15/sahara.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[panko] to Mysql_user[panko@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[panko@%] to Mysql_grant[panko@%/panko.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[panko] to Mysql_user[panko@172.17.1.12] with 'notify'", > "Debug: Adding relationship from 
Mysql_user[panko@172.17.1.12] to Mysql_grant[panko@172.17.1.12/panko.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[panko] to Mysql_user[panko@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[panko@172.17.1.15] to Mysql_grant[panko@172.17.1.15/panko.*] with 'notify'", > "Debug: File[mysql-config-file]: Adding default for owner", > "Debug: File[mysql-config-file]: Adding default for group", > "Debug: File[/etc/my.cnf.d]: Adding default for owner", > "Debug: File[/etc/my.cnf.d]: Adding default for group", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.96 seconds", > "Info: Applying configuration version '1529921468'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[galera]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-galera-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[galera-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[test]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[heat]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@127.0.0.1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@::1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@localhost]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to 
Mysql_user[@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[aodh@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[aodh@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[aodh@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[cinder@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[cinder@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[cinder@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[glance@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[glance@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[glance@172.17.1.15]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[gnocchi@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[gnocchi@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[gnocchi@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[heat@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[heat@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[heat@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[keystone@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[keystone@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[keystone@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[neutron@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[neutron@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[neutron@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: 
subscribes to Mysql_user[nova@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_api@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_api@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_api@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_placement@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_placement@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_placement@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[sahara@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[sahara@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[sahara@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[panko@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[panko@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[panko@172.17.1.15]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[aodh@%/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[aodh@172.17.1.12/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[aodh@172.17.1.15/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[cinder@%/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[cinder@172.17.1.12/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[cinder@172.17.1.15/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[glance@%/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[glance@172.17.1.12/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[glance@172.17.1.15/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[gnocchi@%/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[gnocchi@172.17.1.12/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[gnocchi@172.17.1.15/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to 
Mysql_grant[heat@%/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[heat@172.17.1.12/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[heat@172.17.1.15/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[keystone@%/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[keystone@172.17.1.12/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[keystone@172.17.1.15/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[neutron@%/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[neutron@172.17.1.12/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[neutron@172.17.1.15/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@%/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@172.17.1.12/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@172.17.1.15/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@%/nova_cell0.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@172.17.1.12/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@172.17.1.15/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_api@%/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_api@172.17.1.12/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_api@172.17.1.15/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_placement@%/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_placement@172.17.1.12/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_placement@172.17.1.15/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[sahara@%/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[sahara@172.17.1.12/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[sahara@172.17.1.15/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[panko@%/panko.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[panko@172.17.1.12/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[panko@172.17.1.15/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[test]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes 
to Mysql_database[nova_api]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@127.0.0.1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@::1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@localhost]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@controller-0]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[aodh@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[aodh@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[aodh@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[cinder@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[cinder@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[cinder@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[glance@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[glance@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[glance@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[gnocchi@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[gnocchi@172.17.1.12]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[gnocchi@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[heat@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[heat@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[heat@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[keystone@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[keystone@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[keystone@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[neutron@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[neutron@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[neutron@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova@172.17.1.12]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_api@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_api@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_api@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_placement@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_placement@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_placement@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[sahara@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[sahara@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[sahara@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[panko@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[panko@172.17.1.12]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[panko@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[aodh@%/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[aodh@172.17.1.12/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[aodh@172.17.1.15/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[cinder@%/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[cinder@172.17.1.12/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[cinder@172.17.1.15/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[glance@%/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[glance@172.17.1.12/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[glance@172.17.1.15/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[gnocchi@%/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to 
Mysql_grant[gnocchi@172.17.1.12/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[gnocchi@172.17.1.15/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[heat@%/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[heat@172.17.1.12/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[heat@172.17.1.15/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[keystone@%/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[keystone@172.17.1.12/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[keystone@172.17.1.15/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[neutron@%/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[neutron@172.17.1.12/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[neutron@172.17.1.15/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@%/nova.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@172.17.1.12/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@172.17.1.15/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@%/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@172.17.1.12/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@172.17.1.15/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_api@%/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_api@172.17.1.12/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_api@172.17.1.15/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_placement@%/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_placement@172.17.1.12/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_placement@172.17.1.15/nova_placement.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[sahara@%/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[sahara@172.17.1.12/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[sahara@172.17.1.15/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[panko@%/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[panko@172.17.1.12/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[panko@172.17.1.15/panko.*]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[test]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to 
Mysql_database[nova_placement]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Mysql::Server::Config/before: subscribes to Class[Mysql::Server::Binarylog]", > "Debug: /Stage[main]/Mysql::Server::Install/before: subscribes to Class[Mysql::Server::Config]", > "Debug: /Stage[main]/Mysql::Server::Binarylog/before: subscribes to Class[Mysql::Server::Installdb]", > "Debug: /Stage[main]/Mysql::Server::Installdb/before: subscribes to Class[Mysql::Server::Service]", > "Debug: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/require: subscribes to Mysql_datadir[/var/lib/mysql]", > "Debug: /Stage[main]/Mysql::Server::Service/before: subscribes to Class[Mysql::Server::Root_password]", > "Debug: 
/Stage[main]/Mysql::Server::Root_password/before: subscribes to Class[Mysql::Server::Providers]", > "Debug: /Stage[main]/Mysql::Server::Providers/before: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@127.0.0.1]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@::1]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@%]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@localhost.localdomain]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost.localdomain]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0.localdomain]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@controller-0.localdomain]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@controller-0]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_database[test]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::start]/before: subscribes to Class[Mysql::Server::Install]", > "Debug: /Stage[main]/Aodh::Db::Mysql/notify: subscribes to 
Anchor[aodh::db::end]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::end]/before: subscribes to Anchor[aodh::config::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::end]/notify: subscribes to Anchor[aodh::service::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::end]/before: subscribes to Anchor[aodh::db::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::end]/notify: subscribes to Anchor[aodh::service::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::begin]/before: subscribes to Anchor[aodh::db::end]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::begin]/notify: subscribes to Class[Aodh::Db::Mysql]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::end]/notify: subscribes to Anchor[aodh::dbsync::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::begin]/before: subscribes to Anchor[aodh::dbsync::end]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::end]/notify: subscribes to Anchor[aodh::service::begin]", > "Debug: /Stage[main]/Cinder::Db::Mysql/notify: subscribes to Anchor[cinder::db::end]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]/before: subscribes to Anchor[cinder::config::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]/before: subscribes to Anchor[cinder::db::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]/before: subscribes to Anchor[cinder::db::end]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]/notify: subscribes to Class[Cinder::Db::Mysql]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]/notify: subscribes to Anchor[cinder::dbsync::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]/before: subscribes to 
Anchor[cinder::dbsync::end]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Glance::Db::Mysql/notify: subscribes to Anchor[glance::db::end]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::end]/before: subscribes to Anchor[glance::config::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::end]/notify: subscribes to Anchor[glance::service::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::end]/before: subscribes to Anchor[glance::db::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::end]/notify: subscribes to Anchor[glance::service::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::begin]/before: subscribes to Anchor[glance::db::end]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::begin]/notify: subscribes to Class[Glance::Db::Mysql]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::end]/notify: subscribes to Anchor[glance::dbsync::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::begin]/before: subscribes to Anchor[glance::dbsync::end]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]/notify: subscribes to Anchor[glance::service::begin]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/notify: subscribes to Anchor[gnocchi::db::end]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::end]/before: subscribes to Anchor[gnocchi::config::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::end]/notify: subscribes to Anchor[gnocchi::service::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::end]/before: subscribes to Anchor[gnocchi::db::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::end]/notify: subscribes to Anchor[gnocchi::service::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::begin]/before: subscribes to Anchor[gnocchi::db::end]", > "Debug: 
/Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::begin]/notify: subscribes to Class[Gnocchi::Db::Mysql]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::end]/notify: subscribes to Anchor[gnocchi::dbsync::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::begin]/before: subscribes to Anchor[gnocchi::dbsync::end]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::end]/notify: subscribes to Anchor[gnocchi::service::begin]", > "Debug: /Stage[main]/Heat::Db::Mysql/notify: subscribes to Anchor[heat::db::end]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::end]/before: subscribes to Anchor[heat::config::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::end]/notify: subscribes to Anchor[heat::service::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::end]/before: subscribes to Anchor[heat::db::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::end]/notify: subscribes to Anchor[heat::service::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::begin]/before: subscribes to Anchor[heat::db::end]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::begin]/notify: subscribes to Class[Heat::Db::Mysql]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::end]/notify: subscribes to Anchor[heat::dbsync::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::begin]/before: subscribes to Anchor[heat::dbsync::end]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]/notify: subscribes to Anchor[heat::service::begin]", > "Debug: /Stage[main]/Keystone::Db::Mysql/notify: subscribes to Anchor[keystone::db::end]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::end]/before: subscribes to Anchor[keystone::config::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::end]/notify: subscribes to Anchor[keystone::service::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]/before: subscribes to 
Anchor[keystone::db::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]/notify: subscribes to Anchor[keystone::service::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::begin]/before: subscribes to Anchor[keystone::db::end]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::begin]/notify: subscribes to Class[Keystone::Db::Mysql]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]/notify: subscribes to Anchor[keystone::dbsync::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]/before: subscribes to Anchor[keystone::dbsync::end]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]/notify: subscribes to Anchor[keystone::service::begin]", > "Debug: /Stage[main]/Neutron::Db::Mysql/notify: subscribes to Anchor[neutron::db::end]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]/before: subscribes to Anchor[neutron::config::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]/notify: subscribes to Anchor[neutron::service::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]/before: subscribes to Anchor[neutron::db::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]/notify: subscribes to Anchor[neutron::service::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]/before: subscribes to Anchor[neutron::db::end]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]/notify: subscribes to Class[Neutron::Db::Mysql]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]/notify: subscribes to Anchor[neutron::dbsync::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]/before: subscribes to Anchor[neutron::dbsync::end]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]/notify: subscribes to Anchor[neutron::service::begin]", > "Debug: /Stage[main]/Nova::Db::Mysql/notify: subscribes to 
Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::end]/before: subscribes to Anchor[nova::config::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::end]/before: subscribes to Anchor[nova::db::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]/before: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]/notify: subscribes to Class[Nova::Db::Mysql]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]/notify: subscribes to Class[Nova::Db::Mysql_api]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]/notify: subscribes to Class[Nova::Db::Mysql_placement]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]/subscribe: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]/before: subscribes to Anchor[nova::dbsync_api::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]/subscribe: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]/subscribe: subscribes to Anchor[nova::dbsync_api::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]/before: subscribes to Anchor[nova::dbsync::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]/subscribe: subscribes to Anchor[nova::dbsync_api::end]", > "Debug: 
/Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]/notify: subscribes to Anchor[nova::cell_v2::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]/notify: subscribes to Anchor[nova::dbsync::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]/subscribe: subscribes to Anchor[nova::dbsync_api::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]/before: subscribes to Anchor[nova::db_online_data_migrations::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/notify: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/notify: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Sahara::Db::Mysql/notify: subscribes to Anchor[sahara::db::end]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::end]/before: subscribes to Anchor[sahara::config::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::end]/notify: subscribes to Anchor[sahara::service::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::end]/before: subscribes to Anchor[sahara::db::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::end]/notify: subscribes to Anchor[sahara::service::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::begin]/before: subscribes to Anchor[sahara::db::end]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::begin]/notify: subscribes to Class[Sahara::Db::Mysql]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::end]/notify: subscribes to Anchor[sahara::dbsync::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::begin]/before: subscribes to Anchor[sahara::dbsync::end]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::end]/notify: subscribes to Anchor[sahara::service::begin]", > "Debug: 
/Stage[main]/Panko::Db::Mysql/notify: subscribes to Anchor[panko::db::end]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::end]/before: subscribes to Anchor[panko::config::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::end]/notify: subscribes to Anchor[panko::service::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::end]/before: subscribes to Anchor[panko::db::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::end]/notify: subscribes to Anchor[panko::service::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::begin]/before: subscribes to Anchor[panko::db::end]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::begin]/notify: subscribes to Class[Panko::Db::Mysql]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::end]/notify: subscribes to Anchor[panko::dbsync::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::begin]/before: subscribes to Anchor[panko::dbsync::end]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::end]/notify: subscribes to Anchor[panko::service::begin]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Property[galera-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[galera-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/require: subscribes to Class[Mysql::Server]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/require: subscribes to Pacemaker::Resource::Bundle[galera-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/before: subscribes to Exec[galera-ready]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[test]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@127.0.0.1]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@::1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@localhost]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[aodh@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[aodh@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[aodh@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[cinder@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: 
subscribes to Mysql_user[cinder@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[cinder@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[glance@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[glance@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[glance@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[gnocchi@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[gnocchi@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[gnocchi@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[heat@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[heat@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[heat@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[keystone@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[keystone@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[keystone@172.17.1.15]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[neutron@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[neutron@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[neutron@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_api@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_api@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_api@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_placement@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_placement@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_placement@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[sahara@%]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[sahara@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[sahara@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[panko@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[panko@172.17.1.12]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[panko@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[aodh@%/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[aodh@172.17.1.12/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[aodh@172.17.1.15/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[cinder@%/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[cinder@172.17.1.12/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[cinder@172.17.1.15/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[glance@%/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[glance@172.17.1.12/glance.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[glance@172.17.1.15/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[gnocchi@%/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[gnocchi@172.17.1.12/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[gnocchi@172.17.1.15/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[heat@%/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[heat@172.17.1.12/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[heat@172.17.1.15/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[keystone@%/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[keystone@172.17.1.12/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[keystone@172.17.1.15/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[neutron@%/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[neutron@172.17.1.12/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to 
Mysql_grant[neutron@172.17.1.15/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@%/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@172.17.1.12/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@172.17.1.15/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@%/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@172.17.1.12/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@172.17.1.15/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_api@%/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_api@172.17.1.12/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_api@172.17.1.15/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_placement@%/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_placement@172.17.1.12/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_placement@172.17.1.15/nova_placement.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[sahara@%/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[sahara@172.17.1.12/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[sahara@172.17.1.15/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[panko@%/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[panko@172.17.1.12/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[panko@172.17.1.15/panko.*]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes 
to Mysql_database[panko]", > "Debug: /Stage[main]/Mysql::Client::Install/before: subscribes to Anchor[mysql::client::end]", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::start]/before: subscribes to Class[Mysql::Client::Install]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/notify: subscribes to Mysql_user[aodh@%]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/notify: subscribes to Mysql_user[aodh@172.17.1.12]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/notify: subscribes to Mysql_user[aodh@172.17.1.15]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]/notify: subscribes to Mysql_user[cinder@%]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]/notify: subscribes to Mysql_user[cinder@172.17.1.12]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]/notify: subscribes to Mysql_user[cinder@172.17.1.15]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/notify: subscribes to Mysql_user[glance@%]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/notify: subscribes to Mysql_user[glance@172.17.1.12]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/notify: subscribes to Mysql_user[glance@172.17.1.15]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]/notify: subscribes to Mysql_user[gnocchi@%]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]/notify: subscribes to Mysql_user[gnocchi@172.17.1.12]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]/notify: subscribes to 
Mysql_user[gnocchi@172.17.1.15]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/notify: subscribes to Mysql_user[heat@%]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/notify: subscribes to Mysql_user[heat@172.17.1.12]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/notify: subscribes to Mysql_user[heat@172.17.1.15]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/notify: subscribes to Mysql_user[keystone@%]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/notify: subscribes to Mysql_user[keystone@172.17.1.12]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/notify: subscribes to Mysql_user[keystone@172.17.1.15]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]/notify: subscribes to Mysql_user[neutron@%]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]/notify: subscribes to Mysql_user[neutron@172.17.1.12]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]/notify: subscribes to Mysql_user[neutron@172.17.1.15]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/notify: subscribes to Mysql_user[nova@%]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/notify: subscribes to Mysql_user[nova@172.17.1.12]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/notify: subscribes to Mysql_user[nova@172.17.1.15]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/notify: subscribes to Mysql_user[nova_api@%]", > "Debug: 
/Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/notify: subscribes to Mysql_user[nova_api@172.17.1.12]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/notify: subscribes to Mysql_user[nova_api@172.17.1.15]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/notify: subscribes to Mysql_user[nova_placement@%]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/notify: subscribes to Mysql_user[nova_placement@172.17.1.12]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/notify: subscribes to Mysql_user[nova_placement@172.17.1.15]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]/notify: subscribes to Mysql_user[sahara@%]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]/notify: subscribes to Mysql_user[sahara@172.17.1.12]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]/notify: subscribes to Mysql_user[sahara@172.17.1.15]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]/notify: subscribes to Mysql_user[panko@%]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]/notify: subscribes to Mysql_user[panko@172.17.1.12]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]/notify: subscribes to Mysql_user[panko@172.17.1.15]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_user[aodh@%]/notify: subscribes to Mysql_grant[aodh@%/aodh.*]", > "Debug: 
/Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.12]/Mysql_user[aodh@172.17.1.12]/notify: subscribes to Mysql_grant[aodh@172.17.1.12/aodh.*]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]/Mysql_user[aodh@172.17.1.15]/notify: subscribes to Mysql_grant[aodh@172.17.1.15/aodh.*]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_user[cinder@%]/notify: subscribes to Mysql_grant[cinder@%/cinder.*]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.12]/Mysql_user[cinder@172.17.1.12]/notify: subscribes to Mysql_grant[cinder@172.17.1.12/cinder.*]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]/Mysql_user[cinder@172.17.1.15]/notify: subscribes to Mysql_grant[cinder@172.17.1.15/cinder.*]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_user[glance@%]/notify: subscribes to Mysql_grant[glance@%/glance.*]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.12]/Mysql_user[glance@172.17.1.12]/notify: subscribes to Mysql_grant[glance@172.17.1.12/glance.*]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]/Mysql_user[glance@172.17.1.15]/notify: subscribes to Mysql_grant[glance@172.17.1.15/glance.*]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_user[gnocchi@%]/notify: subscribes to Mysql_grant[gnocchi@%/gnocchi.*]", > "Debug: 
/Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.12]/Mysql_user[gnocchi@172.17.1.12]/notify: subscribes to Mysql_grant[gnocchi@172.17.1.12/gnocchi.*]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]/Mysql_user[gnocchi@172.17.1.15]/notify: subscribes to Mysql_grant[gnocchi@172.17.1.15/gnocchi.*]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_user[heat@%]/notify: subscribes to Mysql_grant[heat@%/heat.*]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.12]/Mysql_user[heat@172.17.1.12]/notify: subscribes to Mysql_grant[heat@172.17.1.12/heat.*]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]/Mysql_user[heat@172.17.1.15]/notify: subscribes to Mysql_grant[heat@172.17.1.15/heat.*]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone@%]/notify: subscribes to Mysql_grant[keystone@%/keystone.*]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.12]/Mysql_user[keystone@172.17.1.12]/notify: subscribes to Mysql_grant[keystone@172.17.1.12/keystone.*]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]/Mysql_user[keystone@172.17.1.15]/notify: subscribes to Mysql_grant[keystone@172.17.1.15/keystone.*]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_user[neutron@%]/notify: subscribes to Mysql_grant[neutron@%/ovs_neutron.*]", > "Debug: 
/Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.12]/Mysql_user[neutron@172.17.1.12]/notify: subscribes to Mysql_grant[neutron@172.17.1.12/ovs_neutron.*]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]/Mysql_user[neutron@172.17.1.15]/notify: subscribes to Mysql_grant[neutron@172.17.1.15/ovs_neutron.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]/notify: subscribes to Mysql_grant[nova@%/nova.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]/notify: subscribes to Mysql_grant[nova@%/nova_cell0.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.12]/Mysql_user[nova@172.17.1.12]/notify: subscribes to Mysql_grant[nova@172.17.1.12/nova.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.12]/Mysql_user[nova@172.17.1.12]/notify: subscribes to Mysql_grant[nova@172.17.1.12/nova_cell0.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]/Mysql_user[nova@172.17.1.15]/notify: subscribes to Mysql_grant[nova@172.17.1.15/nova.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]/Mysql_user[nova@172.17.1.15]/notify: subscribes to Mysql_grant[nova@172.17.1.15/nova_cell0.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_user[nova_api@%]/notify: subscribes to Mysql_grant[nova_api@%/nova_api.*]", > "Debug: 
/Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.12]/Mysql_user[nova_api@172.17.1.12]/notify: subscribes to Mysql_grant[nova_api@172.17.1.12/nova_api.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]/Mysql_user[nova_api@172.17.1.15]/notify: subscribes to Mysql_grant[nova_api@172.17.1.15/nova_api.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_user[nova_placement@%]/notify: subscribes to Mysql_grant[nova_placement@%/nova_placement.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.12]/Mysql_user[nova_placement@172.17.1.12]/notify: subscribes to Mysql_grant[nova_placement@172.17.1.12/nova_placement.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]/Mysql_user[nova_placement@172.17.1.15]/notify: subscribes to Mysql_grant[nova_placement@172.17.1.15/nova_placement.*]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_user[sahara@%]/notify: subscribes to Mysql_grant[sahara@%/sahara.*]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.12]/Mysql_user[sahara@172.17.1.12]/notify: subscribes to Mysql_grant[sahara@172.17.1.12/sahara.*]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]/Mysql_user[sahara@172.17.1.15]/notify: subscribes to Mysql_grant[sahara@172.17.1.15/sahara.*]", > "Debug: 
/Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_user[panko@%]/notify: subscribes to Mysql_grant[panko@%/panko.*]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.12]/Mysql_user[panko@172.17.1.12]/notify: subscribes to Mysql_grant[panko@172.17.1.12/panko.*]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]/Mysql_user[panko@172.17.1.15]/notify: subscribes to Mysql_grant[panko@172.17.1.15/panko.*]", > "Debug: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: Adding autorequire relationship with File[/etc/my.cnf.d]", > "Debug: /Stage[main]/Mysql::Server::Installdb/Mysql_datadir[/var/lib/mysql]: Adding autorequire relationship with Package[mysql-server]", > "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > 
"Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle]: Resource is being 
skipped, unscheduling all events", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/ensure: defined content as '{md5}69e032b0df155d12050294dfc6f40434'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]: The container Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}81f3beaae0e1273fed025adfd903c277'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]: The container Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle] will propagate my refresh event", > "Debug: Class[Tripleo::Profile::Base::Database::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Tripleo::Profile::Base::Database::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Mysql::Server/Anchor[mysql::server::start]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::start]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Install]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server::Install/Package[mysql-server]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server::Install/Package[mysql-server]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Config]: Resource is being skipped, unscheduling all events", > "Info: Computing checksum on file /etc/my.cnf.d/galera.cnf", > "Info: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: Filebucketed /etc/my.cnf.d/galera.cnf to puppet with sum af90358207ccfecae7af249d5ef7dd3e", > "Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{md5}af90358207ccfecae7af249d5ef7dd3e' to 
'{md5}e673fa5f672212d789e5a45ebfc5b712'", > "Debug: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: The container Class[Mysql::Server::Config] will propagate my refresh event", > "Info: Class[Mysql::Server::Config]: Unscheduling all events on Class[Mysql::Server::Config]", > "Debug: Class[Mysql::Server::Binarylog]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Binarylog]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Installdb]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Installdb]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server::Installdb/Mysql_datadir[/var/lib/mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server::Installdb/Mysql_datadir[/var/lib/mysql]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/ensure: created", > "Debug: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]: The container Class[Mysql::Server::Installdb] will propagate my refresh event", > "Info: Class[Mysql::Server::Installdb]: Unscheduling all events on Class[Mysql::Server::Installdb]", > "Debug: Class[Mysql::Server::Service]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Service]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Root_password]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Root_password]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server::Root_password/Exec[remove install pass]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server::Root_password/Exec[remove install pass]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Providers]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Providers]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::end]: Resource is being skipped, unscheduling all events", > "Debug: 
Class[Mysql::Server::Account_security]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Account_security]: Resource is being skipped, unscheduling all events", > "Debug: Class[Aodh::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Aodh::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::begin]: Resource is being skipped, 
unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Aodh::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Aodh::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[aodh]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[aodh]: Resource is being 
skipped, unscheduling all events", > "Debug: Class[Cinder::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Cinder::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, 
mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Cinder::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[cinder]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[cinder]: Resource is being skipped, unscheduling all events", > "Debug: Class[Glance::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, 
mysql_grant, mysql_user", > "Debug: Class[Glance::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Glance::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Glance::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[glance]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[glance]: Resource is being skipped, unscheduling all events", > "Debug: Class[Gnocchi::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Gnocchi::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::begin]: Resource is being skipped, unscheduling all 
events", > "Debug: Class[Gnocchi::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Gnocchi::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[gnocchi]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Class[Heat::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Heat::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Heat::Deps/Anchor[heat::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Heat::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Heat::Db::Mysql]: Resource 
is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[heat]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[heat]: Resource is being skipped, unscheduling all events", > "Debug: Class[Keystone::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Keystone::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
/Stage[main]/Keystone::Deps/Anchor[keystone::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Keystone::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Keystone::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[keystone]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[keystone]: Resource is being skipped, unscheduling all events", > "Debug: Class[Neutron::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Neutron::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::begin]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Neutron::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Neutron::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::service::end]: Resource is being skipped, unscheduling all events", > "Debug: 
Openstacklib::Db::Mysql[neutron]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[neutron]: Resource is being skipped, unscheduling all events", > "Debug: Class[Nova::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Nova::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::begin]: Resource is being skipped, unscheduling all 
events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Nova::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Nova::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[nova]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[nova]: Resource is being skipped, 
unscheduling all events", > "Debug: Openstacklib::Db::Mysql[nova_cell0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[nova_cell0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Nova::Db::Mysql_api]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Nova::Db::Mysql_api]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[nova_api]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[nova_api]: Resource is being skipped, unscheduling all events", > "Debug: Class[Nova::Db::Mysql_placement]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Nova::Db::Mysql_placement]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[nova_placement]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: 
Class[Sahara::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Sahara::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
/Stage[main]/Sahara::Deps/Anchor[sahara::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Sahara::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Sahara::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[sahara]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[sahara]: Resource is being skipped, unscheduling all events", > "Debug: Class[Panko::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > 
"Debug: Class[Panko::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Panko::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Panko::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[panko]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[panko]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[galera-role-controller-0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Pacemaker::Property[galera-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create 
Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-lvwac2 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-lvwac2 property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Debug: Class[Mysql::Client]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Client]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::start]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::start]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Client::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Client::Install]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Client::Install/Package[mysql_client]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
/Stage[main]/Mysql::Client::Install/Package[mysql_client]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.12]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.12]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.12]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.12]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.12]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.12]: Resource is being skipped, unscheduling all events", > "Debug: 
Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.12]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.12]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, 
mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.12]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.12]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.12]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.12]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.12]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.12]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_%]: 
Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.12]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.12]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.12]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.12]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.12]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.12]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_%]: Resource is being skipped, unscheduling all events", > "Debug: 
Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.12]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.12]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.12]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.12]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, 
galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.12]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.12]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-ne2s11 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-ne2s11 property show | grep galera-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | grep galera-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1pu504o returned ", > "Debug: try 
1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1pu504o property set --node controller-0 galera-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1pu504o diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1pu504o.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 galera-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Property[galera-role-controller-0]/Pcmk_property[property-controller-0-galera-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Property[galera-role-controller-0]/Pcmk_property[property-controller-0-galera-role]: The container Pacemaker::Property[galera-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[galera-role-controller-0]: Unscheduling all events on Pacemaker::Property[galera-role-controller-0]", > "Debug: Pacemaker::Resource::Bundle[galera-bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Pacemaker::Resource::Bundle[galera-bundle]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1hn8m02 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1hn8m02 constraint list | grep location-galera-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1uhycob returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1uhycob resource show galera-bundle > /dev/null 2>&1", > "Debug: Exists: bundle galera-bundle exists 1 
location exists 1 deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-uiyyai returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-uiyyai resource bundle create galera-bundle container docker image=192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest replicas=1 masters=1 options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=mysql-cfg-files source-dir=/var/lib/kolla/config_files/mysql.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=mysql-cfg-data source-dir=/var/lib/config-data/puppet-generated/mysql/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=mysql-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=mysql-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=mysql-lib source-dir=/var/lib/mysql target-dir=/var/lib/mysql options=rw storage-map id=mysql-log-mariadb source-dir=/var/log/mariadb target-dir=/var/log/mariadb options=rw storage-map id=mysql-log source-dir=/var/log/containers/mysql target-dir=/var/log/mysql options=rw storage-map id=mysql-dev-log source-dir=/dev/log target-dir=/dev/log options=rw network control-port=3123 --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-uiyyai diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-uiyyai.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location galera-bundle rule resource-discovery=exclusive score=0 galera-role eq true", > "Debug: location_rule_create: constraint location galera-bundle rule resource-discovery=exclusive score=0 galera-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1h40kp3 returned ", > 
"Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1h40kp3 constraint location galera-bundle rule resource-discovery=exclusive score=0 galera-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1h40kp3 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1h40kp3.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-gmmlrs returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-gmmlrs resource enable galera-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-gmmlrs diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-gmmlrs.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Bundle[galera-bundle]/Pcmk_bundle[galera-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Bundle[galera-bundle]/Pcmk_bundle[galera-bundle]: The container Pacemaker::Resource::Bundle[galera-bundle] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[galera-bundle]: Unscheduling all events on Pacemaker::Resource::Bundle[galera-bundle]", > "Debug: Pacemaker::Resource::Ocf[galera]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Pacemaker::Resource::Ocf[galera]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-fhr4ig returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-fhr4ig constraint list | grep 
location-galera-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-2eqn36 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-2eqn36 resource show galera > /dev/null 2>&1", > "Debug: Exists: resource galera exists 1 location exists 0 resource deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1mej9iw returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1mej9iw resource create galera ocf:heartbeat:galera log='/var/log/mysql/mysqld.log' additional_parameters='--open-files-limit=16384' enable_creation=true wsrep_cluster_address='gcomm://controller-0.internalapi.localdomain' cluster_host_map='controller-0:controller-0.internalapi.localdomain' meta master-max=1 ordered=true container-attribute-target=host op promote timeout=300s on-fail=block bundle galera-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1mej9iw diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1mej9iw.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/Pcmk_resource[galera]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/Pcmk_resource[galera]: The container Pacemaker::Resource::Ocf[galera] will propagate my refresh event", > "Info: Pacemaker::Resource::Ocf[galera]: Unscheduling all events on Pacemaker::Resource::Ocf[galera]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Exec try 1/180", > "Debug: Exec[galera-ready](provider=posix): Executing '/usr/bin/clustercheck >/dev/null'", > "Debug: Executing: '/usr/bin/clustercheck >/dev/null'", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Sleeping for 10 seconds between tries", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Exec try 2/180", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Exec try 3/180", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Exec try 4/180", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: executed successfully", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]: The container Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle] will propagate my refresh event", > "Info: Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle]: Unscheduling all events on Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle]", > "Debug: Prefetching mysql resources for mysql_user", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT CONCAT(User, '@',Host) AS User FROM mysql.user'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@%''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@127.0.0.1''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM 
mysql.user WHERE CONCAT(user, '@', host) = 'root@::1''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@controller-0''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'clustercheck@localhost''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@localhost''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e DROP USER IF EXISTS 'root'@'127.0.0.1''", > "Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@127.0.0.1]/ensure: removed", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@127.0.0.1]: The container Class[Mysql::Server::Account_security] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e DROP USER IF EXISTS 'root'@'::1''", > "Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@::1]/ensure: removed", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@::1]: The container Class[Mysql::Server::Account_security] will propagate my refresh event", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@%]: Nothing to manage: no ensure 
and the resource doesn't exist", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@localhost.localdomain]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost.localdomain]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0.localdomain]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@controller-0.localdomain]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e DROP USER IF EXISTS 'root'@'controller-0''", > "Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0]/ensure: removed", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0]: The container Class[Mysql::Server::Account_security] will propagate my refresh event", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@controller-0]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: Prefetching mysql resources for mysql_database", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show databases'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show variables like '%_database' information_schema'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show variables like '%_database' mysql'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show variables like '%_database' performance_schema'", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_database[test]: Nothing to manage: no ensure and the resource doesn't exist", > "Info: Class[Mysql::Server::Account_security]: Unscheduling all events on 
Class[Mysql::Server::Account_security]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `aodh` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]: The container Openstacklib::Db::Mysql[aodh] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `cinder` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]: The container Openstacklib::Db::Mysql[cinder] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `glance` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]: The container Openstacklib::Db::Mysql[glance] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `gnocchi` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]: The container Openstacklib::Db::Mysql[gnocchi] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `heat` character set 
`utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]: The container Openstacklib::Db::Mysql[heat] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `keystone` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]: The container Openstacklib::Db::Mysql[keystone] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `ovs_neutron` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]: The container Openstacklib::Db::Mysql[neutron] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `nova` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]: The container Openstacklib::Db::Mysql[nova] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `nova_cell0` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Mysql_database[nova_cell0]/ensure: 
created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Mysql_database[nova_cell0]: The container Openstacklib::Db::Mysql[nova_cell0] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `nova_api` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]: The container Openstacklib::Db::Mysql[nova_api] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `nova_placement` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]: The container Openstacklib::Db::Mysql[nova_placement] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `sahara` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]: The container Openstacklib::Db::Mysql[sahara] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `panko` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]/ensure: created", > "Debug: 
/Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]: The container Openstacklib::Db::Mysql[panko] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'aodh'@'%' IDENTIFIED BY PASSWORD '*0FBDB4543FEE3CA3F2F6B87C6C218D7CA27E97BB''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_user[aodh@%]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_user[aodh@%]: The container Openstacklib::Db::Mysql::Host_access[aodh_%] will propagate my refresh event", > "Debug: Prefetching mysql resources for mysql_grant", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SHOW GRANTS FOR 'aodh'@'%';'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SHOW GRANTS FOR 'root'@'%';'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SHOW GRANTS FOR 'clustercheck'@'localhost';'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SHOW GRANTS FOR 'root'@'localhost';'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `aodh`.* TO 'aodh'@'%''", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_grant[aodh@%/aodh.*]/ensure: created", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe 
FLUSH PRIVILEGES'", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_grant[aodh@%/aodh.*]: The container Openstacklib::Db::Mysql::Host_access[aodh_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[aodh_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[aodh_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'aodh'@'172.17.1.12' IDENTIFIED BY PASSWORD '*0FBDB4543FEE3CA3F2F6B87C6C218D7CA27E97BB''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'172.17.1.12' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'172.17.1.12' REQUIRE NONE'", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.12]/Mysql_user[aodh@172.17.1.12]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.12]/Mysql_user[aodh@172.17.1.12]: The container Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.12] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `aodh`.* TO 'aodh'@'172.17.1.12''", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.12]/Mysql_grant[aodh@172.17.1.12/aodh.*]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.12]/Mysql_grant[aodh@172.17.1.12/aodh.*]: The container Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.12] will propagate my refresh event", 
> "Info: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.12]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.12]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'aodh'@'172.17.1.15' IDENTIFIED BY PASSWORD '*0FBDB4543FEE3CA3F2F6B87C6C218D7CA27E97BB''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]/Mysql_user[aodh@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]/Mysql_user[aodh@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `aodh`.* TO 'aodh'@'172.17.1.15''", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]/Mysql_grant[aodh@172.17.1.15/aodh.*]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]/Mysql_grant[aodh@172.17.1.15/aodh.*]: The container Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[aodh]: Unscheduling all events on 
Openstacklib::Db::Mysql[aodh]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'cinder'@'%' IDENTIFIED BY PASSWORD '*3EAD039016527F0932C91E3DE0B68F0913C1E5BB''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e 
GRANT USAGE ON *.* TO 'cinder'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_user[cinder@%]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_user[cinder@%]: The container Openstacklib::Db::Mysql::Host_access[cinder_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `cinder`.* TO 'cinder'@'%''", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_grant[cinder@%/cinder.*]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_grant[cinder@%/cinder.*]: The container Openstacklib::Db::Mysql::Host_access[cinder_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[cinder_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[cinder_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'cinder'@'172.17.1.12' IDENTIFIED BY PASSWORD '*3EAD039016527F0932C91E3DE0B68F0913C1E5BB''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'172.17.1.12' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'172.17.1.12' REQUIRE NONE'", > "Notice: 
/Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.12]/Mysql_user[cinder@172.17.1.12]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.12]/Mysql_user[cinder@172.17.1.12]: The container Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.12] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `cinder`.* TO 'cinder'@'172.17.1.12''", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.12]/Mysql_grant[cinder@172.17.1.12/cinder.*]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.12]/Mysql_grant[cinder@172.17.1.12/cinder.*]: The container Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.12] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.12]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.12]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'cinder'@'172.17.1.15' IDENTIFIED BY PASSWORD '*3EAD039016527F0932C91E3DE0B68F0913C1E5BB''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]/Mysql_user[cinder@172.17.1.15]/ensure: created", > "Debug: 
/Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]/Mysql_user[cinder@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `cinder`.* TO 'cinder'@'172.17.1.15''", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]/Mysql_grant[cinder@172.17.1.15/cinder.*]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]/Mysql_grant[cinder@172.17.1.15/cinder.*]: The container Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[cinder]: Unscheduling all events on Openstacklib::Db::Mysql[cinder]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]: Resource is being skipped, unscheduling all events", > 
"Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'glance'@'%' IDENTIFIED BY PASSWORD '*023EA0A9C3F9C6BF1556ECC09A7F27F266643904''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_user[glance@%]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_user[glance@%]: The container Openstacklib::Db::Mysql::Host_access[glance_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `glance`.* TO 'glance'@'%''", > "Notice: 
/Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_grant[glance@%/glance.*]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_grant[glance@%/glance.*]: The container Openstacklib::Db::Mysql::Host_access[glance_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[glance_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[glance_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'glance'@'172.17.1.12' IDENTIFIED BY PASSWORD '*023EA0A9C3F9C6BF1556ECC09A7F27F266643904''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'172.17.1.12' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'172.17.1.12' REQUIRE NONE'", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.12]/Mysql_user[glance@172.17.1.12]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.12]/Mysql_user[glance@172.17.1.12]: The container Openstacklib::Db::Mysql::Host_access[glance_172.17.1.12] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `glance`.* TO 'glance'@'172.17.1.12''", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.12]/Mysql_grant[glance@172.17.1.12/glance.*]/ensure: created", > "Debug: 
/Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.12]/Mysql_grant[glance@172.17.1.12/glance.*]: The container Openstacklib::Db::Mysql::Host_access[glance_172.17.1.12] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.12]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[glance_172.17.1.12]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'glance'@'172.17.1.15' IDENTIFIED BY PASSWORD '*023EA0A9C3F9C6BF1556ECC09A7F27F266643904''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]/Mysql_user[glance@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]/Mysql_user[glance@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `glance`.* TO 'glance'@'172.17.1.15''", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]/Mysql_grant[glance@172.17.1.15/glance.*]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]/Mysql_grant[glance@172.17.1.15/glance.*]: The container 
Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[glance]: Unscheduling all events on Openstacklib::Db::Mysql[glance]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
/Stage[main]/Glance::Deps/Anchor[glance::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'gnocchi'@'%' IDENTIFIED BY PASSWORD '*0B5B7548C61BAC10E6A7AE5DE9FEBD93252CAE65''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_user[gnocchi@%]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_user[gnocchi@%]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `gnocchi`.* TO 'gnocchi'@'%''", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_grant[gnocchi@%/gnocchi.*]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_grant[gnocchi@%/gnocchi.*]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[gnocchi_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[gnocchi_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'gnocchi'@'172.17.1.12' IDENTIFIED BY PASSWORD '*0B5B7548C61BAC10E6A7AE5DE9FEBD93252CAE65''", > 
"Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'172.17.1.12' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'172.17.1.12' REQUIRE NONE'", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.12]/Mysql_user[gnocchi@172.17.1.12]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.12]/Mysql_user[gnocchi@172.17.1.12]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.12] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `gnocchi`.* TO 'gnocchi'@'172.17.1.12''", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.12]/Mysql_grant[gnocchi@172.17.1.12/gnocchi.*]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.12]/Mysql_grant[gnocchi@172.17.1.12/gnocchi.*]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.12] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.12]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.12]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'gnocchi'@'172.17.1.15' IDENTIFIED BY PASSWORD '*0B5B7548C61BAC10E6A7AE5DE9FEBD93252CAE65''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'172.17.1.15' WITH 
MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]/Mysql_user[gnocchi@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]/Mysql_user[gnocchi@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `gnocchi`.* TO 'gnocchi'@'172.17.1.15''", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]/Mysql_grant[gnocchi@172.17.1.15/gnocchi.*]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]/Mysql_grant[gnocchi@172.17.1.15/gnocchi.*]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[gnocchi]: Unscheduling all events on Openstacklib::Db::Mysql[gnocchi]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::end]: Resource is being 
skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'heat'@'%' IDENTIFIED BY PASSWORD '*1FFDC78D63460E562DC2119EE8CE11AEC01A00EE''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'%' REQUIRE NONE'", > "Notice: 
/Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_user[heat@%]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_user[heat@%]: The container Openstacklib::Db::Mysql::Host_access[heat_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `heat`.* TO 'heat'@'%''", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_grant[heat@%/heat.*]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_grant[heat@%/heat.*]: The container Openstacklib::Db::Mysql::Host_access[heat_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[heat_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[heat_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'heat'@'172.17.1.12' IDENTIFIED BY PASSWORD '*1FFDC78D63460E562DC2119EE8CE11AEC01A00EE''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'172.17.1.12' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'172.17.1.12' REQUIRE NONE'", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.12]/Mysql_user[heat@172.17.1.12]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.12]/Mysql_user[heat@172.17.1.12]: The container 
Openstacklib::Db::Mysql::Host_access[heat_172.17.1.12] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `heat`.* TO 'heat'@'172.17.1.12''", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.12]/Mysql_grant[heat@172.17.1.12/heat.*]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.12]/Mysql_grant[heat@172.17.1.12/heat.*]: The container Openstacklib::Db::Mysql::Host_access[heat_172.17.1.12] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.12]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[heat_172.17.1.12]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'heat'@'172.17.1.15' IDENTIFIED BY PASSWORD '*1FFDC78D63460E562DC2119EE8CE11AEC01A00EE''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]/Mysql_user[heat@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]/Mysql_user[heat@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `heat`.* TO 
'heat'@'172.17.1.15''", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]/Mysql_grant[heat@172.17.1.15/heat.*]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]/Mysql_grant[heat@172.17.1.15/heat.*]: The container Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[heat]: Unscheduling all events on Openstacklib::Db::Mysql[heat]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Heat::Deps/Anchor[heat::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'keystone'@'%' IDENTIFIED BY PASSWORD '*0193168B8F7AC575C175EDB6D7F264EE5C7F6ACB''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone@%]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone@%]: The container Openstacklib::Db::Mysql::Host_access[keystone_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `keystone`.* TO 'keystone'@'%''", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_grant[keystone@%/keystone.*]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_grant[keystone@%/keystone.*]: The container Openstacklib::Db::Mysql::Host_access[keystone_%] will propagate my refresh event", > 
"Info: Openstacklib::Db::Mysql::Host_access[keystone_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[keystone_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'keystone'@'172.17.1.12' IDENTIFIED BY PASSWORD '*0193168B8F7AC575C175EDB6D7F264EE5C7F6ACB''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'172.17.1.12' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'172.17.1.12' REQUIRE NONE'", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.12]/Mysql_user[keystone@172.17.1.12]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.12]/Mysql_user[keystone@172.17.1.12]: The container Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.12] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `keystone`.* TO 'keystone'@'172.17.1.12''", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.12]/Mysql_grant[keystone@172.17.1.12/keystone.*]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.12]/Mysql_grant[keystone@172.17.1.12/keystone.*]: The container Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.12] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.12]: Unscheduling all events on 
Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.12]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'keystone'@'172.17.1.15' IDENTIFIED BY PASSWORD '*0193168B8F7AC575C175EDB6D7F264EE5C7F6ACB''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]/Mysql_user[keystone@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]/Mysql_user[keystone@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `keystone`.* TO 'keystone'@'172.17.1.15''", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]/Mysql_grant[keystone@172.17.1.15/keystone.*]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]/Mysql_grant[keystone@172.17.1.15/keystone.*]: The container Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[keystone]: Unscheduling all 
events on Openstacklib::Db::Mysql[keystone]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'neutron'@'%' IDENTIFIED BY PASSWORD '*D18F5280C3CAAD58D8093C7B110A89BAC0B2DD06''", > "Debug: 
Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_user[neutron@%]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_user[neutron@%]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `ovs_neutron`.* TO 'neutron'@'%''", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_grant[neutron@%/ovs_neutron.*]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_grant[neutron@%/ovs_neutron.*]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'neutron'@'172.17.1.12' IDENTIFIED BY PASSWORD '*D18F5280C3CAAD58D8093C7B110A89BAC0B2DD06''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'172.17.1.12' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: 
'/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'172.17.1.12' REQUIRE NONE'", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.12]/Mysql_user[neutron@172.17.1.12]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.12]/Mysql_user[neutron@172.17.1.12]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.12] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `ovs_neutron`.* TO 'neutron'@'172.17.1.12''", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.12]/Mysql_grant[neutron@172.17.1.12/ovs_neutron.*]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.12]/Mysql_grant[neutron@172.17.1.12/ovs_neutron.*]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.12] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.12]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.12]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'neutron'@'172.17.1.15' IDENTIFIED BY PASSWORD '*D18F5280C3CAAD58D8093C7B110A89BAC0B2DD06''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 
'neutron'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]/Mysql_user[neutron@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]/Mysql_user[neutron@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `ovs_neutron`.* TO 'neutron'@'172.17.1.15''", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]/Mysql_grant[neutron@172.17.1.15/ovs_neutron.*]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]/Mysql_grant[neutron@172.17.1.15/ovs_neutron.*]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[neutron]: Unscheduling all events on Openstacklib::Db::Mysql[neutron]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova'@'%' IDENTIFIED BY PASSWORD '*D7C30035422305016B741ECBE18A66054165A6D2''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]/ensure: created", > "Debug: 
/Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]: The container Openstacklib::Db::Mysql::Host_access[nova_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'%''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_grant[nova@%/nova.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_grant[nova@%/nova.*]: The container Openstacklib::Db::Mysql::Host_access[nova_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova'@'172.17.1.12' IDENTIFIED BY PASSWORD '*D7C30035422305016B741ECBE18A66054165A6D2''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'172.17.1.12' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'172.17.1.12' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.12]/Mysql_user[nova@172.17.1.12]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.12]/Mysql_user[nova@172.17.1.12]: The container Openstacklib::Db::Mysql::Host_access[nova_172.17.1.12] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT 
ALL PRIVILEGES ON `nova`.* TO 'nova'@'172.17.1.12''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.12]/Mysql_grant[nova@172.17.1.12/nova.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.12]/Mysql_grant[nova@172.17.1.12/nova.*]: The container Openstacklib::Db::Mysql::Host_access[nova_172.17.1.12] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.12]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_172.17.1.12]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova'@'172.17.1.15' IDENTIFIED BY PASSWORD '*D7C30035422305016B741ECBE18A66054165A6D2''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]/Mysql_user[nova@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]/Mysql_user[nova@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'172.17.1.15''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]/Mysql_grant[nova@172.17.1.15/nova.*]/ensure: created", > 
"Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]/Mysql_grant[nova@172.17.1.15/nova.*]: The container Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[nova]: Unscheduling all events on Openstacklib::Db::Mysql[nova]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'%''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_%]/Mysql_grant[nova@%/nova_cell0.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_%]/Mysql_grant[nova@%/nova_cell0.*]: The container Openstacklib::Db::Mysql::Host_access[nova_cell0_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_cell0_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_cell0_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'172.17.1.12''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.12]/Mysql_grant[nova@172.17.1.12/nova_cell0.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.12]/Mysql_grant[nova@172.17.1.12/nova_cell0.*]: The container Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.12] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.12]: Unscheduling all events on 
Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.12]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'172.17.1.15''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.15]/Mysql_grant[nova@172.17.1.15/nova_cell0.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.15]/Mysql_grant[nova@172.17.1.15/nova_cell0.*]: The container Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[nova_cell0]: Unscheduling all events on Openstacklib::Db::Mysql[nova_cell0]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_api'@'%' IDENTIFIED BY PASSWORD '*D7C30035422305016B741ECBE18A66054165A6D2''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_user[nova_api@%]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_user[nova_api@%]: The container Openstacklib::Db::Mysql::Host_access[nova_api_%] will propagate my refresh event", > "Debug: Executing: 
'/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova_api'@'%''", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_grant[nova_api@%/nova_api.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_grant[nova_api@%/nova_api.*]: The container Openstacklib::Db::Mysql::Host_access[nova_api_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_api_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_api_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_api'@'172.17.1.12' IDENTIFIED BY PASSWORD '*D7C30035422305016B741ECBE18A66054165A6D2''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'172.17.1.12' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'172.17.1.12' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.12]/Mysql_user[nova_api@172.17.1.12]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.12]/Mysql_user[nova_api@172.17.1.12]: The container Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.12] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova_api'@'172.17.1.12''", > "Notice: 
/Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.12]/Mysql_grant[nova_api@172.17.1.12/nova_api.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.12]/Mysql_grant[nova_api@172.17.1.12/nova_api.*]: The container Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.12] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.12]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.12]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_api'@'172.17.1.15' IDENTIFIED BY PASSWORD '*D7C30035422305016B741ECBE18A66054165A6D2''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]/Mysql_user[nova_api@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]/Mysql_user[nova_api@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova_api'@'172.17.1.15''", > "Notice: 
/Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]/Mysql_grant[nova_api@172.17.1.15/nova_api.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]/Mysql_grant[nova_api@172.17.1.15/nova_api.*]: The container Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[nova_api]: Unscheduling all events on Openstacklib::Db::Mysql[nova_api]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_placement'@'%' IDENTIFIED BY PASSWORD '*D7C30035422305016B741ECBE18A66054165A6D2''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_user[nova_placement@%]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_user[nova_placement@%]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_placement`.* TO 'nova_placement'@'%''", > "Notice: 
/Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_grant[nova_placement@%/nova_placement.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_grant[nova_placement@%/nova_placement.*]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_placement_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_placement_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_placement'@'172.17.1.12' IDENTIFIED BY PASSWORD '*D7C30035422305016B741ECBE18A66054165A6D2''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'172.17.1.12' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'172.17.1.12' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.12]/Mysql_user[nova_placement@172.17.1.12]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.12]/Mysql_user[nova_placement@172.17.1.12]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.12] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_placement`.* TO 'nova_placement'@'172.17.1.12''", > "Notice: 
/Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.12]/Mysql_grant[nova_placement@172.17.1.12/nova_placement.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.12]/Mysql_grant[nova_placement@172.17.1.12/nova_placement.*]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.12] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.12]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.12]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_placement'@'172.17.1.15' IDENTIFIED BY PASSWORD '*D7C30035422305016B741ECBE18A66054165A6D2''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]/Mysql_user[nova_placement@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]/Mysql_user[nova_placement@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_placement`.* TO 
'nova_placement'@'172.17.1.15''", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]/Mysql_grant[nova_placement@172.17.1.15/nova_placement.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]/Mysql_grant[nova_placement@172.17.1.15/nova_placement.*]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[nova_placement]: Unscheduling all events on Openstacklib::Db::Mysql[nova_placement]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, 
mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'sahara'@'%' IDENTIFIED BY PASSWORD '*F879A499E15BFAEE3E4E61CE2CDFBB35A3BDF736''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_user[sahara@%]/ensure: created", > "Debug: 
/Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_user[sahara@%]: The container Openstacklib::Db::Mysql::Host_access[sahara_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `sahara`.* TO 'sahara'@'%''", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_grant[sahara@%/sahara.*]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_grant[sahara@%/sahara.*]: The container Openstacklib::Db::Mysql::Host_access[sahara_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[sahara_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[sahara_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'sahara'@'172.17.1.12' IDENTIFIED BY PASSWORD '*F879A499E15BFAEE3E4E61CE2CDFBB35A3BDF736''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'172.17.1.12' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'172.17.1.12' REQUIRE NONE'", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.12]/Mysql_user[sahara@172.17.1.12]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.12]/Mysql_user[sahara@172.17.1.12]: The container Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.12] will propagate my refresh event", > "Debug: Executing: 
'/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `sahara`.* TO 'sahara'@'172.17.1.12''", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.12]/Mysql_grant[sahara@172.17.1.12/sahara.*]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.12]/Mysql_grant[sahara@172.17.1.12/sahara.*]: The container Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.12] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.12]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.12]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'sahara'@'172.17.1.15' IDENTIFIED BY PASSWORD '*F879A499E15BFAEE3E4E61CE2CDFBB35A3BDF736''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]/Mysql_user[sahara@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]/Mysql_user[sahara@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `sahara`.* TO 'sahara'@'172.17.1.15''", > "Notice: 
/Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]/Mysql_grant[sahara@172.17.1.15/sahara.*]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]/Mysql_grant[sahara@172.17.1.15/sahara.*]: The container Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[sahara]: Unscheduling all events on Openstacklib::Db::Mysql[sahara]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::end]: Resource is being skipped, unscheduling all events", > 
"Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'panko'@'%' IDENTIFIED BY PASSWORD '*1B63F11E97D2EAFFE40CC12E835B9ABD37D3F013''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_user[panko@%]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_user[panko@%]: The container Openstacklib::Db::Mysql::Host_access[panko_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `panko`.* TO 'panko'@'%''", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_grant[panko@%/panko.*]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_grant[panko@%/panko.*]: The container Openstacklib::Db::Mysql::Host_access[panko_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[panko_%]: 
Unscheduling all events on Openstacklib::Db::Mysql::Host_access[panko_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'panko'@'172.17.1.12' IDENTIFIED BY PASSWORD '*1B63F11E97D2EAFFE40CC12E835B9ABD37D3F013''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'172.17.1.12' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'172.17.1.12' REQUIRE NONE'", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.12]/Mysql_user[panko@172.17.1.12]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.12]/Mysql_user[panko@172.17.1.12]: The container Openstacklib::Db::Mysql::Host_access[panko_172.17.1.12] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `panko`.* TO 'panko'@'172.17.1.12''", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.12]/Mysql_grant[panko@172.17.1.12/panko.*]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.12]/Mysql_grant[panko@172.17.1.12/panko.*]: The container Openstacklib::Db::Mysql::Host_access[panko_172.17.1.12] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.12]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[panko_172.17.1.12]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'panko'@'172.17.1.15' 
IDENTIFIED BY PASSWORD '*1B63F11E97D2EAFFE40CC12E835B9ABD37D3F013''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]/Mysql_user[panko@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]/Mysql_user[panko@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `panko`.* TO 'panko'@'172.17.1.15''", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]/Mysql_grant[panko@172.17.1.15/panko.*]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]/Mysql_grant[panko@172.17.1.15/panko.*]: The container Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[panko]: Unscheduling all events on Openstacklib::Db::Mysql[panko]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, 
mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[monthly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Finishing transaction 46870480", > "Notice: Applied catalog in 71.35 seconds", > " Total: 103", > " Success: 103", > " Changed: 103", > " Out of sync: 103", > " Skipped: 136", > " Total: 250", > " File: 0.10", > " Mysql database: 0.18", > " Mysql grant: 1.10", > " Mysql user: 1.43", > " Last run: 1529921544", > " Pcmk bundle: 16.92", > " Exec: 32.13", > " Config retrieval: 4.21", > " Total: 74.34", > " Pcmk property: 8.56", > " Pcmk resource: 9.71", > " Config: 1529921468", > "Debug: Finishing transaction 49319880", > "+ 
TAGS=file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", > "+ CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle'", > "+ puppet apply --debug --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user -e 'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle'", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/mysql_bundle.pp\", 133]:[\"unknown\", 1]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 103]:[\"unknown\", 1]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'aodh' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::String. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/aodh/manifests/db/mysql.pp\", 58]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 175]", > "Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'glance' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'gnocchi' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'heat' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'keystone' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'sahara' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'panko' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp\", 43]:", > "stdout: Info: Loading facts", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.27 seconds", > "Info: Applying configuration version '1529921551'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::L3_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[l3_haproxy_process_wrapper]/File[/var/lib/neutron/l3_haproxy_wrapper]/ensure: defined content as '{md5}e741722509854288f828fa66335ad134'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[l3_haproxy_process_wrapper]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[l3_haproxy_process_wrapper]", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::L3_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Keepalived[l3_keepalived]/File[/var/lib/neutron/keepalived_wrapper]/ensure: defined content as '{md5}3a8c0df398c10f053e45f2f2ea5ccd93'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Keepalived[l3_keepalived]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Keepalived[l3_keepalived]", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::L3_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Keepalived_state_change[l3_keepalived_state_change]/File[/var/lib/neutron/keepalived_state_change_wrapper]/ensure: defined content as '{md5}f72bfec5dc1c16b968223450454f78bf'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Keepalived_state_change[l3_keepalived_state_change]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Keepalived_state_change[l3_keepalived_state_change]", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Neutron::L3_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Dibbler_client[l3_dibbler_daemon]/File[/var/lib/neutron/dibbler_wrapper]/ensure: defined content as '{md5}d8fd38ee59394a46ad9b984126f1e767'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Dibbler_client[l3_dibbler_daemon]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Dibbler_client[l3_dibbler_daemon]", > "Notice: Applied catalog in 0.02 seconds", > " Total: 4", > " Success: 4", > " Total: 11", > " Out of sync: 4", > " Changed: 4", > " Skipped: 7", > " File: 0.01", > " Config retrieval: 0.39", > " Total: 0.40", > " Last run: 1529921551", > " Config: 1529921551", > "stderr: + STEP=4", > "+ TAGS=file", > "+ CONFIG='include ::tripleo::profile::base::neutron::l3_agent_wrappers'", > "+ EXTRA_ARGS=", > "+ echo '{\"step\": 4}'", > "+ puppet apply --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file -e 'include ::tripleo::profile::base::neutron::l3_agent_wrappers'", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.24 seconds", > "Info: Applying configuration version '1529921555'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::Dhcp_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Dnsmasq[dhcp_dnsmasq_process_wrapper]/File[/var/lib/neutron/dnsmasq_wrapper]/ensure: defined content as '{md5}bdbee777940c5f4b2d9089e50e2791f0'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Dnsmasq[dhcp_dnsmasq_process_wrapper]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Dnsmasq[dhcp_dnsmasq_process_wrapper]", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::Dhcp_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[dhcp_haproxy_process_wrapper]/File[/var/lib/neutron/dhcp_haproxy_wrapper]/ensure: defined content as 
'{md5}d77797fffe398a35675248a492b97d14'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[dhcp_haproxy_process_wrapper]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[dhcp_haproxy_process_wrapper]", > "Notice: Applied catalog in 0.01 seconds", > " Total: 2", > " Success: 2", > " Changed: 2", > " Out of sync: 2", > " Total: 9", > " File: 0.00", > " Config retrieval: 0.36", > " Total: 0.36", > " Last run: 1529921555", > " Config: 1529921555", > "+ CONFIG='include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'", > "+ puppet apply --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file -e 'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'", > "stderr: Error: unable to find resource 'redis-bundle'", > "stdout: 59b1bfd9d05f9372dd3418e49d397a6294df846fa9d95a2b17ddceb6de2ad21e", > "stdout: 28d9c52515d65007da07e5c43362465bfafc393035b67b3fc8679a9a02b69c0a", > "stdout: f7edcbcf072aa52b80b64a776b40b24b510e5d0d3372e649e6fabcea82d2d742", > "stderr: Error: unable to find resource 'haproxy-bundle'", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/redis_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::database::redis_bundle from tripleo/profile/pacemaker/database/redis_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::certificate_specs in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::redis_docker_image in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::pacemaker::database::redis_bundle::redis_docker_control_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::redis_network in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::extra_config_file in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_tunnel_local_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_tunnel_base_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_bind_ip in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_fqdn in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_port in JSON backend", > "Debug: hiera(): Looking up redis_certificate_specs in JSON backend", > "Debug: hiera(): Looking up redis_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::control_port in JSON backend", > "Debug: hiera(): Looking up redis_network in JSON backend", > "Debug: hiera(): Looking up redis_file_limit in JSON backend", > "Debug: importing '/etc/puppet/modules/redis/manifests/init.pp' in environment production", > "Debug: Automatically imported redis from redis into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/params.pp' in environment production", > "Debug: Automatically imported redis::params from redis/params into production", > "Debug: hiera(): Looking up redis::activerehashing in JSON backend", > "Debug: hiera(): Looking up 
redis::aof_load_truncated in JSON backend", > "Debug: hiera(): Looking up redis::aof_rewrite_incremental_fsync in JSON backend", > "Debug: hiera(): Looking up redis::appendfilename in JSON backend", > "Debug: hiera(): Looking up redis::appendfsync in JSON backend", > "Debug: hiera(): Looking up redis::appendonly in JSON backend", > "Debug: hiera(): Looking up redis::auto_aof_rewrite_min_size in JSON backend", > "Debug: hiera(): Looking up redis::auto_aof_rewrite_percentage in JSON backend", > "Debug: hiera(): Looking up redis::bind in JSON backend", > "Debug: hiera(): Looking up redis::output_buffer_limit_slave in JSON backend", > "Debug: hiera(): Looking up redis::output_buffer_limit_pubsub in JSON backend", > "Debug: hiera(): Looking up redis::conf_template in JSON backend", > "Debug: hiera(): Looking up redis::config_dir in JSON backend", > "Debug: hiera(): Looking up redis::config_dir_mode in JSON backend", > "Debug: hiera(): Looking up redis::config_file in JSON backend", > "Debug: hiera(): Looking up redis::config_file_mode in JSON backend", > "Debug: hiera(): Looking up redis::config_file_orig in JSON backend", > "Debug: hiera(): Looking up redis::config_group in JSON backend", > "Debug: hiera(): Looking up redis::config_owner in JSON backend", > "Debug: hiera(): Looking up redis::daemonize in JSON backend", > "Debug: hiera(): Looking up redis::databases in JSON backend", > "Debug: hiera(): Looking up redis::default_install in JSON backend", > "Debug: hiera(): Looking up redis::dbfilename in JSON backend", > "Debug: hiera(): Looking up redis::extra_config_file in JSON backend", > "Debug: hiera(): Looking up redis::hash_max_ziplist_entries in JSON backend", > "Debug: hiera(): Looking up redis::hash_max_ziplist_value in JSON backend", > "Debug: hiera(): Looking up redis::hll_sparse_max_bytes in JSON backend", > "Debug: hiera(): Looking up redis::hz in JSON backend", > "Debug: hiera(): Looking up redis::latency_monitor_threshold in JSON backend", > "Debug: 
hiera(): Looking up redis::list_max_ziplist_entries in JSON backend", > "Debug: hiera(): Looking up redis::list_max_ziplist_value in JSON backend", > "Debug: hiera(): Looking up redis::log_dir in JSON backend", > "Debug: hiera(): Looking up redis::log_dir_mode in JSON backend", > "Debug: hiera(): Looking up redis::log_file in JSON backend", > "Debug: hiera(): Looking up redis::log_level in JSON backend", > "Debug: hiera(): Looking up redis::manage_package in JSON backend", > "Debug: hiera(): Looking up redis::manage_repo in JSON backend", > "Debug: hiera(): Looking up redis::masterauth in JSON backend", > "Debug: hiera(): Looking up redis::maxclients in JSON backend", > "Debug: hiera(): Looking up redis::maxmemory in JSON backend", > "Debug: hiera(): Looking up redis::maxmemory_policy in JSON backend", > "Debug: hiera(): Looking up redis::maxmemory_samples in JSON backend", > "Debug: hiera(): Looking up redis::min_slaves_max_lag in JSON backend", > "Debug: hiera(): Looking up redis::min_slaves_to_write in JSON backend", > "Debug: hiera(): Looking up redis::no_appendfsync_on_rewrite in JSON backend", > "Debug: hiera(): Looking up redis::notify_keyspace_events in JSON backend", > "Debug: hiera(): Looking up redis::notify_service in JSON backend", > "Debug: hiera(): Looking up redis::managed_by_cluster_manager in JSON backend", > "Debug: hiera(): Looking up redis::package_ensure in JSON backend", > "Debug: hiera(): Looking up redis::package_name in JSON backend", > "Debug: hiera(): Looking up redis::pid_file in JSON backend", > "Debug: hiera(): Looking up redis::port in JSON backend", > "Debug: hiera(): Looking up redis::protected_mode in JSON backend", > "Debug: hiera(): Looking up redis::ppa_repo in JSON backend", > "Debug: hiera(): Looking up redis::rdbcompression in JSON backend", > "Debug: hiera(): Looking up redis::repl_backlog_size in JSON backend", > "Debug: hiera(): Looking up redis::repl_backlog_ttl in JSON backend", > "Debug: hiera(): Looking up 
redis::repl_disable_tcp_nodelay in JSON backend", > "Debug: hiera(): Looking up redis::repl_ping_slave_period in JSON backend", > "Debug: hiera(): Looking up redis::repl_timeout in JSON backend", > "Debug: hiera(): Looking up redis::requirepass in JSON backend", > "Debug: hiera(): Looking up redis::save_db_to_disk in JSON backend", > "Debug: hiera(): Looking up redis::save_db_to_disk_interval in JSON backend", > "Debug: hiera(): Looking up redis::service_enable in JSON backend", > "Debug: hiera(): Looking up redis::service_ensure in JSON backend", > "Debug: hiera(): Looking up redis::service_group in JSON backend", > "Debug: hiera(): Looking up redis::service_hasrestart in JSON backend", > "Debug: hiera(): Looking up redis::service_hasstatus in JSON backend", > "Debug: hiera(): Looking up redis::service_manage in JSON backend", > "Debug: hiera(): Looking up redis::service_name in JSON backend", > "Debug: hiera(): Looking up redis::service_provider in JSON backend", > "Debug: hiera(): Looking up redis::service_user in JSON backend", > "Debug: hiera(): Looking up redis::set_max_intset_entries in JSON backend", > "Debug: hiera(): Looking up redis::slave_priority in JSON backend", > "Debug: hiera(): Looking up redis::slave_read_only in JSON backend", > "Debug: hiera(): Looking up redis::slave_serve_stale_data in JSON backend", > "Debug: hiera(): Looking up redis::slaveof in JSON backend", > "Debug: hiera(): Looking up redis::slowlog_log_slower_than in JSON backend", > "Debug: hiera(): Looking up redis::slowlog_max_len in JSON backend", > "Debug: hiera(): Looking up redis::stop_writes_on_bgsave_error in JSON backend", > "Debug: hiera(): Looking up redis::syslog_enabled in JSON backend", > "Debug: hiera(): Looking up redis::syslog_facility in JSON backend", > "Debug: hiera(): Looking up redis::tcp_backlog in JSON backend", > "Debug: hiera(): Looking up redis::tcp_keepalive in JSON backend", > "Debug: hiera(): Looking up redis::timeout in JSON backend", > "Debug: hiera(): 
Looking up redis::unixsocket in JSON backend", > "Debug: hiera(): Looking up redis::unixsocketperm in JSON backend", > "Debug: hiera(): Looking up redis::ulimit in JSON backend", > "Debug: hiera(): Looking up redis::workdir in JSON backend", > "Debug: hiera(): Looking up redis::workdir_mode in JSON backend", > "Debug: hiera(): Looking up redis::zset_max_ziplist_entries in JSON backend", > "Debug: hiera(): Looking up redis::zset_max_ziplist_value in JSON backend", > "Debug: hiera(): Looking up redis::cluster_enabled in JSON backend", > "Debug: hiera(): Looking up redis::cluster_config_file in JSON backend", > "Debug: hiera(): Looking up redis::cluster_node_timeout in JSON backend", > "Debug: importing '/etc/puppet/modules/redis/manifests/preinstall.pp' in environment production", > "Debug: Automatically imported redis::preinstall from redis/preinstall into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/install.pp' in environment production", > "Debug: Automatically imported redis::install from redis/install into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/config.pp' in environment production", > "Debug: Automatically imported redis::config from redis/config into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/instance.pp' in environment production", > "Debug: Automatically imported redis::instance from redis/instance into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/ulimit.pp' in environment production", > "Debug: Automatically imported redis::ulimit from redis/ulimit into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/service.pp' in environment production", > "Debug: Automatically imported redis::service from redis/service into production", > "Debug: hiera(): Looking up redis_short_node_names in JSON backend", > "Debug: Scope(Redis::Instance[default]): Retrieving template redis/redis.conf.3.2.erb", > "Debug: 
template[/etc/puppet/modules/redis/templates/redis.conf.3.2.erb]: Bound template variables for /etc/puppet/modules/redis/templates/redis.conf.3.2.erb in 0.01 seconds", > "Debug: template[/etc/puppet/modules/redis/templates/redis.conf.3.2.erb]: Interpolated template /etc/puppet/modules/redis/templates/redis.conf.3.2.erb in 0.01 seconds", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[redis] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-redis-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[redis-bundle] with 'before'", > "Debug: Adding relationship from Class[Redis::Preinstall] to Class[Redis::Install] with 'before'", > "Debug: Adding relationship from Class[Redis::Install] to Class[Redis::Config] with 'before'", > "Debug: File[/etc/redis]: Adding default for owner", > "Debug: File[/etc/redis]: Adding default for group", > "Debug: File[/etc/systemd/system/redis.service.d/]: Adding default for mode", > "Debug: File[/etc/redis.conf.puppet]: Adding default for owner", > "Debug: File[/etc/redis.conf.puppet]: Adding default for group", > "Debug: File[/etc/redis.conf.puppet]: Adding default for mode", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.36 seconds", > "Info: Applying configuration version '1529921562'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[redis]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-redis-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[redis-bundle]", > "Debug: /Stage[main]/Redis::Preinstall/before: subscribes to Class[Redis::Install]", > "Debug: /Stage[main]/Redis::Install/before: subscribes to Class[Redis::Config]", > "Debug: /Stage[main]/Redis::Ulimit/Augeas[Systemd redis 
ulimit]/notify: subscribes to Exec[systemd-reload-redis]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Property[redis-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[redis-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Ocf[redis]/require: subscribes to Pacemaker::Resource::Bundle[redis-bundle]", > "Debug: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]/subscribe: subscribes to File[/etc/redis.conf.puppet]", > "Debug: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]: Adding autorequire relationship with File[/etc/systemd/system/redis.service.d/]", > "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Pacemaker::Database::Redis_bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Pacemaker::Database::Redis_bundle]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Preinstall]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Preinstall]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Install]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Install]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Redis::Install/Package[redis]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Redis::Install/Package[redis]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Config]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Redis::Config/File[/etc/redis]/ensure: created", > "Debug: /Stage[main]/Redis::Config/File[/etc/redis]: The container Class[Redis::Config] will propagate my refresh event", > "Notice: /Stage[main]/Redis::Config/File[/var/log/redis]/mode: mode changed '0750' to '0755'", > "Debug: /Stage[main]/Redis::Config/File[/var/log/redis]: The container Class[Redis::Config] will propagate my refresh event", > "Notice: /Stage[main]/Redis::Config/File[/var/lib/redis]/mode: mode changed '0750' to '0755'", > "Debug: /Stage[main]/Redis::Config/File[/var/lib/redis]: The container Class[Redis::Config] will propagate my refresh event", > "Debug: Redis::Instance[default]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Redis::Instance[default]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Ulimit]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Ulimit]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]/ensure: defined content as '{md5}a2f723773964f5ea42b6c7c5d6b72208'", > "Debug: /Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]: The container Class[Redis::Ulimit] will propagate my refresh event", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]/mode: mode changed '0644' to '0444'", > "Debug: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]: The container Class[Redis::Ulimit] will propagate my refresh event", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): Opening augeas with root /, lens path , flags 64", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): Augeas version 1.4.0 is installed", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): Will attempt to save and only run if files changed", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): sending command 'defnode' with params [\"nofile\", \"/etc/systemd/system/redis.service.d/limits.conf/Service/LimitNOFILE\", \"\"]", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): sending command 'set' with params [\"$nofile/value\", \"10240\"]", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): Skipping because no files were changed", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): Closed the augeas connection", > "Info: Class[Redis::Ulimit]: Unscheduling all events on Class[Redis::Ulimit]", > "Debug: Class[Redis::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Service]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Redis/Exec[systemd-reload-redis]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Redis/Exec[systemd-reload-redis]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[redis-role-controller-0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Property[redis-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
/Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-71wztx returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-71wztx property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]/ensure: defined content as '{md5}6152d9c54fa9e5398c46445d29df2eaf'", > "Info: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]: Scheduling refresh of Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]", > "Debug: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]: The container Redis::Instance[default] will propagate my refresh event", > "Debug: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Resource is being skipped, unscheduling all events", > "Info: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Unscheduling all events on Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]", > "Info: Redis::Instance[default]: Unscheduling all events on Redis::Instance[default]", > "Info: Class[Redis::Config]: Unscheduling all events on Class[Redis::Config]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-zn4654 returned ", > "Debug: 
/usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-zn4654 property show | grep redis-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | grep redis-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1jfsl5g returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1jfsl5g property set --node controller-0 redis-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1jfsl5g diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1jfsl5g.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 redis-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Property[redis-role-controller-0]/Pcmk_property[property-controller-0-redis-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Property[redis-role-controller-0]/Pcmk_property[property-controller-0-redis-role]: The container Pacemaker::Property[redis-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[redis-role-controller-0]: Unscheduling all events on Pacemaker::Property[redis-role-controller-0]", > "Debug: Pacemaker::Resource::Bundle[redis-bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Bundle[redis-bundle]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1kg7ntn returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1kg7ntn constraint list | grep 
location-redis-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-x9fzp4 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-x9fzp4 resource show redis-bundle > /dev/null 2>&1", > "Debug: Exists: bundle redis-bundle exists 1 location exists 1 deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-nwg48l returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-nwg48l resource bundle create redis-bundle container docker image=192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest replicas=1 masters=1 options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=redis-cfg-files source-dir=/var/lib/kolla/config_files/redis.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=redis-cfg-data-redis source-dir=/var/lib/config-data/puppet-generated/redis/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=redis-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=redis-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=redis-lib source-dir=/var/lib/redis target-dir=/var/lib/redis options=rw storage-map id=redis-log source-dir=/var/log/containers/redis target-dir=/var/log/redis options=rw storage-map id=redis-run source-dir=/var/run/redis target-dir=/var/run/redis options=rw storage-map id=redis-pki-extracted source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted options=ro storage-map id=redis-pki-ca-bundle-crt source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro storage-map id=redis-pki-ca-bundle-trust-crt source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt 
target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro storage-map id=redis-pki-cert source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem options=ro storage-map id=redis-dev-log source-dir=/dev/log target-dir=/dev/log options=rw network control-port=3124 --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-nwg48l diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-nwg48l.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location redis-bundle rule resource-discovery=exclusive score=0 redis-role eq true", > "Debug: location_rule_create: constraint location redis-bundle rule resource-discovery=exclusive score=0 redis-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1ov7g0i returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1ov7g0i constraint location redis-bundle rule resource-discovery=exclusive score=0 redis-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1ov7g0i diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1ov7g0i.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-zw4l1x returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-zw4l1x resource enable redis-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-zw4l1x diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-zw4l1x.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Bundle[redis-bundle]/Pcmk_bundle[redis-bundle]/ensure: created", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Bundle[redis-bundle]/Pcmk_bundle[redis-bundle]: The container Pacemaker::Resource::Bundle[redis-bundle] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[redis-bundle]: Unscheduling all events on Pacemaker::Resource::Bundle[redis-bundle]", > "Debug: Pacemaker::Resource::Ocf[redis]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ocf[redis]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-pt8mjm returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-pt8mjm constraint list | grep location-redis-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-2zhj7n returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-2zhj7n resource show redis > /dev/null 2>&1", > "Debug: Exists: resource redis exists 1 location exists 0 resource deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1enqa1h returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1enqa1h resource create redis ocf:heartbeat:redis wait_last_known_master=true meta notify=true ordered=true interleave=true container-attribute-target=host op start timeout=200s stop timeout=200s bundle redis-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1enqa1h diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1enqa1h.orig returned 0 -> CIB updated", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Ocf[redis]/Pcmk_resource[redis]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Ocf[redis]/Pcmk_resource[redis]: The container Pacemaker::Resource::Ocf[redis] will propagate my refresh event", > "Info: Pacemaker::Resource::Ocf[redis]: Unscheduling all events on Pacemaker::Resource::Ocf[redis]", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[monthly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Finishing transaction 40559320", > "Debug: Stored state in 0.00 seconds", > "Notice: Applied catalog in 36.45 seconds", > " Total: 13", > " Success: 13", > " Changed: 13", > " Out of sync: 13", > " Total: 42", > " Augeas: 0.01", > " File: 0.02", > " Config retrieval: 1.49", > " Pcmk resource: 10.07", > " Last run: 1529921600", > " Pcmk bundle: 17.39", > " Total: 37.77", > " Pcmk property: 8.78", > " Config: 1529921562", > "Debug: Finishing transaction 46204160", > "+ TAGS=file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", > "+ CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle'", > "+ puppet apply --debug --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation -e 'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle'", > "Warning: ModuleLoader: module 'redis' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/haproxy_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::haproxy_bundle from tripleo/profile/pacemaker/haproxy_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::haproxy_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::enable_load_balancer in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::ca_bundle in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::crl_file in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::internal_certs_directory in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::internal_keys_directory in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::deployed_ssl_cert_path in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up haproxy_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up enable_load_balancer in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ca_bundle in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::crl_file in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::service_certificate in JSON backend", > "Debug: importing 
'/etc/puppet/modules/tripleo/manifests/profile/base/haproxy.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::haproxy from tripleo/profile/base/haproxy into production", > "Debug: hiera(): Looking up tripleo::profile::base::haproxy::certificates_specs in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::haproxy::enable_load_balancer in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::haproxy::manage_firewall in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::haproxy::step in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::manage_firewall in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy from tripleo/haproxy into production", > "Debug: hiera(): Looking up tripleo::haproxy::controller_virtual_ip in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::public_virtual_ip in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_service_manage in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_global_maxconn in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_default_maxconn in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_default_timeout in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_listen_bind_param in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_member_options in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_log_address in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::activate_httplog in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_globals_override in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_defaults_override in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_daemon in JSON 
backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_socket_access_level in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_stats_user in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_stats_password in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::controller_hosts in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::controller_hosts_names in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::use_internal_certificates in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ssl_cipher_suite in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ssl_options in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_stats_certificate in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_stats in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_admin in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_public in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::neutron in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::cinder in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::congress in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::manila in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::sahara in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::tacker in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::trove in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::glance_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_osapi in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_placement in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_metadata in JSON backend", > "Debug: hiera(): Looking up 
tripleo::haproxy::nova_novncproxy in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ec2_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ec2_api_metadata in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::aodh in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::panko in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::barbican in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::gnocchi in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mistral in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::swift_proxy_server in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::heat_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::heat_cfn in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::horizon in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ironic in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ironic_inspector in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::octavia in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::designate in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mysql in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::kubernetes_master in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mysql_clustercheck in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mysql_max_conn in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mysql_member_options in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::rabbitmq in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::etcd in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::docker_registry in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::redis in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::redis_password in JSON backend", > "Debug: hiera(): 
Looking up tripleo::haproxy::midonet_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::zaqar_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ceph_rgw in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::opendaylight in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ovn_dbs in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ovn_dbs_manage_lb in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::zaqar_ws in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ui in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::aodh_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::barbican_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ceph_rgw_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::cinder_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::congress_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::designate_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::docker_registry_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::glance_api_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::gnocchi_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::heat_api_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::heat_cfn_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::horizon_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ironic_inspector_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ironic_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::kubernetes_master_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_admin_network in JSON backend", > "Debug: hiera(): Looking up 
tripleo::haproxy::keystone_public_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::manila_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mistral_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::neutron_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_metadata_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_novncproxy_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_osapi_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_placement_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::octavia_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::opendaylight_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::panko_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ovn_dbs_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ec2_api_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ec2_api_metadata_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::etcd_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::sahara_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::swift_proxy_server_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::tacker_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::trove_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::zaqar_api_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::service_ports in JSON backend", > "Debug: hiera(): Looking up controller_node_ips in JSON backend", > "Debug: hiera(): Looking up controller_node_names in JSON backend", > "Debug: hiera(): Looking up nova_vnc_proxy_enabled in JSON backend", > "Debug: hiera(): Looking up swift_proxy_enabled 
in JSON backend", > "Debug: hiera(): Looking up heat_api_enabled in JSON backend", > "Debug: hiera(): Looking up heat_api_cfn_enabled in JSON backend", > "Debug: hiera(): Looking up horizon_enabled in JSON backend", > "Debug: hiera(): Looking up mysql_enabled in JSON backend", > "Debug: hiera(): Looking up kubernetes_master_enabled in JSON backend", > "Debug: hiera(): Looking up etcd_enabled in JSON backend", > "Debug: hiera(): Looking up enable_docker_registry in JSON backend", > "Debug: hiera(): Looking up redis_enabled in JSON backend", > "Debug: hiera(): Looking up ceph_rgw_enabled in JSON backend", > "Debug: hiera(): Looking up opendaylight_api_enabled in JSON backend", > "Debug: hiera(): Looking up ovn_dbs_enabled in JSON backend", > "Debug: hiera(): Looking up tripleo_ui_enabled in JSON backend", > "Debug: hiera(): Looking up enable_ui in JSON backend", > "Debug: hiera(): Looking up aodh_api_network in JSON backend", > "Debug: hiera(): Looking up barbican_api_network in JSON backend", > "Debug: hiera(): Looking up ceph_rgw_network in JSON backend", > "Debug: hiera(): Looking up cinder_api_network in JSON backend", > "Debug: hiera(): Looking up congress_api_network in JSON backend", > "Debug: hiera(): Looking up designate_api_network in JSON backend", > "Debug: hiera(): Looking up docker_registry_network in JSON backend", > "Debug: hiera(): Looking up glance_api_network in JSON backend", > "Debug: hiera(): Looking up gnocchi_api_network in JSON backend", > "Debug: hiera(): Looking up heat_api_network in JSON backend", > "Debug: hiera(): Looking up heat_api_cfn_network in JSON backend", > "Debug: hiera(): Looking up horizon_network in JSON backend", > "Debug: hiera(): Looking up ironic_inspector_network in JSON backend", > "Debug: hiera(): Looking up ironic_api_network in JSON backend", > "Debug: hiera(): Looking up kubernetes_master_network in JSON backend", > "Debug: hiera(): Looking up keystone_admin_api_network in JSON backend", > "Debug: hiera(): Looking 
up keystone_public_api_network in JSON backend", > "Debug: hiera(): Looking up manila_api_network in JSON backend", > "Debug: hiera(): Looking up mistral_api_network in JSON backend", > "Debug: hiera(): Looking up neutron_api_network in JSON backend", > "Debug: hiera(): Looking up nova_api_network in JSON backend", > "Debug: hiera(): Looking up nova_vnc_proxy_network in JSON backend", > "Debug: hiera(): Looking up nova_placement_network in JSON backend", > "Debug: hiera(): Looking up octavia_api_network in JSON backend", > "Debug: hiera(): Looking up opendaylight_api_network in JSON backend", > "Debug: hiera(): Looking up panko_api_network in JSON backend", > "Debug: hiera(): Looking up ovn_dbs_network in JSON backend", > "Debug: hiera(): Looking up ec2_api_network in JSON backend", > "Debug: hiera(): Looking up etcd_network in JSON backend", > "Debug: hiera(): Looking up sahara_api_network in JSON backend", > "Debug: hiera(): Looking up swift_proxy_network in JSON backend", > "Debug: hiera(): Looking up tacker_api_network in JSON backend", > "Debug: hiera(): Looking up trove_api_network in JSON backend", > "Debug: hiera(): Looking up zaqar_api_network in JSON backend", > "Debug: hiera(): Looking up mysql_vip in JSON backend", > "Debug: hiera(): Looking up rabbitmq_vip in JSON backend", > "Debug: hiera(): Looking up redis_vip in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/init.pp' in environment production", > "Debug: Automatically imported haproxy from haproxy into production", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/params.pp' in environment production", > "Debug: Automatically imported haproxy::params from haproxy/params into production", > "Debug: hiera(): Looking up haproxy::package_ensure in JSON backend", > "Debug: hiera(): Looking up haproxy::package_name in JSON backend", > "Debug: hiera(): Looking up haproxy::service_ensure in JSON backend", > "Debug: hiera(): Looking up haproxy::service_options in JSON 
backend", > "Debug: hiera(): Looking up haproxy::sysconfig_options in JSON backend", > "Debug: hiera(): Looking up haproxy::merge_options in JSON backend", > "Debug: hiera(): Looking up haproxy::restart_command in JSON backend", > "Debug: hiera(): Looking up haproxy::custom_fragment in JSON backend", > "Debug: hiera(): Looking up haproxy::config_dir in JSON backend", > "Debug: hiera(): Looking up haproxy::config_file in JSON backend", > "Debug: hiera(): Looking up haproxy::manage_config_dir in JSON backend", > "Debug: hiera(): Looking up haproxy::config_validate_cmd in JSON backend", > "Debug: hiera(): Looking up haproxy::manage_service in JSON backend", > "Debug: hiera(): Looking up haproxy::enable in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/instance.pp' in environment production", > "Debug: Automatically imported haproxy::instance from haproxy/instance into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy/endpoint.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy::endpoint from tripleo/haproxy/endpoint into production", > "Debug: hiera(): Looking up enabled_services in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy/service_endpoints.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy::service_endpoints from tripleo/haproxy/service_endpoints into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy/stats.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy::stats from tripleo/haproxy/stats into production", > "Debug: hiera(): Looking up tripleo::haproxy::stats::certificate in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/listen.pp' in environment production", > "Debug: Automatically imported haproxy::listen from haproxy/listen into production", > "Debug: hiera(): Looking up keystone_admin_api_vip in JSON backend", > "Debug: 
hiera(): Looking up keystone_admin_api_node_ips in JSON backend", > "Debug: hiera(): Looking up keystone_admin_api_node_names in JSON backend", > "Debug: hiera(): Looking up keystone_public_api_vip in JSON backend", > "Debug: hiera(): Looking up keystone_public_api_node_ips in JSON backend", > "Debug: hiera(): Looking up keystone_public_api_node_names in JSON backend", > "Debug: hiera(): Looking up neutron_api_vip in JSON backend", > "Debug: hiera(): Looking up neutron_api_node_ips in JSON backend", > "Debug: hiera(): Looking up neutron_api_node_names in JSON backend", > "Debug: hiera(): Looking up cinder_api_vip in JSON backend", > "Debug: hiera(): Looking up cinder_api_node_ips in JSON backend", > "Debug: hiera(): Looking up cinder_api_node_names in JSON backend", > "Debug: hiera(): Looking up sahara_api_vip in JSON backend", > "Debug: hiera(): Looking up sahara_api_node_ips in JSON backend", > "Debug: hiera(): Looking up sahara_api_node_names in JSON backend", > "Debug: hiera(): Looking up glance_api_vip in JSON backend", > "Debug: hiera(): Looking up glance_api_node_ips in JSON backend", > "Debug: hiera(): Looking up glance_api_node_names in JSON backend", > "Debug: hiera(): Looking up nova_api_vip in JSON backend", > "Debug: hiera(): Looking up nova_api_node_ips in JSON backend", > "Debug: hiera(): Looking up nova_api_node_names in JSON backend", > "Debug: hiera(): Looking up nova_placement_vip in JSON backend", > "Debug: hiera(): Looking up nova_placement_node_ips in JSON backend", > "Debug: hiera(): Looking up nova_placement_node_names in JSON backend", > "Debug: hiera(): Looking up nova_metadata_vip in JSON backend", > "Debug: hiera(): Looking up nova_metadata_node_ips in JSON backend", > "Debug: hiera(): Looking up nova_metadata_node_names in JSON backend", > "Debug: hiera(): Looking up aodh_api_vip in JSON backend", > "Debug: hiera(): Looking up aodh_api_node_ips in JSON backend", > "Debug: hiera(): Looking up aodh_api_node_names in JSON backend", > 
"Debug: hiera(): Looking up panko_api_vip in JSON backend", > "Debug: hiera(): Looking up panko_api_node_ips in JSON backend", > "Debug: hiera(): Looking up panko_api_node_names in JSON backend", > "Debug: hiera(): Looking up gnocchi_api_vip in JSON backend", > "Debug: hiera(): Looking up gnocchi_api_node_ips in JSON backend", > "Debug: hiera(): Looking up gnocchi_api_node_names in JSON backend", > "Debug: hiera(): Looking up swift_proxy_vip in JSON backend", > "Debug: hiera(): Looking up swift_proxy_node_ips in JSON backend", > "Debug: hiera(): Looking up swift_proxy_node_names in JSON backend", > "Debug: hiera(): Looking up heat_api_vip in JSON backend", > "Debug: hiera(): Looking up heat_api_node_ips in JSON backend", > "Debug: hiera(): Looking up heat_api_node_names in JSON backend", > "Debug: hiera(): Looking up horizon_vip in JSON backend", > "Debug: hiera(): Looking up horizon_node_ips in JSON backend", > "Debug: hiera(): Looking up horizon_node_names in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy/horizon_endpoint.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy::horizon_endpoint from tripleo/haproxy/horizon_endpoint into production", > "Debug: hiera(): Looking up tripleo::haproxy::horizon_endpoint::public_certificate in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::horizon::options in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/balancermember.pp' in environment production", > "Debug: Automatically imported haproxy::balancermember from haproxy/balancermember into production", > "Debug: hiera(): Looking up mysql_node_ips in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall.pp' in environment production", > "Debug: Automatically imported tripleo::firewall from tripleo/firewall into production", > "Debug: hiera(): Looking up tripleo::firewall::firewall_chains in JSON backend", > "Debug: hiera(): Looking up 
tripleo::firewall::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::purge_firewall_chains in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::purge_firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_pre_extras in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_post_extras in JSON backend", > "Debug: Resource class[tripleo::firewall::pre] was not determined to be defined", > "Debug: Create new resource class[tripleo::firewall::pre] with params {\"firewall_settings\"=>{}}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/pre.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::pre from tripleo/firewall/pre into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/init.pp' in environment production", > "Debug: Automatically imported firewall from firewall into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/params.pp' in environment production", > "Debug: Automatically imported firewall::params from firewall/params into production", > "Debug: hiera(): Looking up firewall::ensure in JSON backend", > "Debug: hiera(): Looking up firewall::ensure_v6 in JSON backend", > "Debug: hiera(): Looking up firewall::pkg_ensure in JSON backend", > "Debug: hiera(): Looking up firewall::service_name in JSON backend", > "Debug: hiera(): Looking up firewall::service_name_v6 in JSON backend", > "Debug: hiera(): Looking up firewall::package_name in JSON backend", > "Debug: hiera(): Looking up firewall::ebtables_manage in JSON backend", > "Debug: importing '/etc/puppet/modules/firewall/manifests/linux.pp' in environment production", > "Debug: Automatically imported firewall::linux from firewall/linux into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/linux/redhat.pp' in environment production", > "Debug: Automatically imported firewall::linux::redhat from 
firewall/linux/redhat into production", > "Debug: hiera(): Looking up firewall::linux::redhat::package_ensure in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/rule.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::rule from tripleo/firewall/rule into production", > "Debug: Resource class[tripleo::firewall::post] was not determined to be defined", > "Debug: Create new resource class[tripleo::firewall::post] with params {\"firewall_settings\"=>{}}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/post.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::post from tripleo/firewall/post into production", > "Debug: hiera(): Looking up tripleo::firewall::post::debug in JSON backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Debug: hiera(): Looking up service_names in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/service_rules.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::service_rules from tripleo/firewall/service_rules into production", > "Debug: hiera(): Looking up redis_node_ips in JSON backend", > "Debug: hiera(): Looking up redis_node_names in JSON backend", > "Debug: hiera(): Looking up midonet_cluster_vip in JSON backend", > "Debug: hiera(): Looking up haproxy_short_node_names in JSON backend", > "Debug: hiera(): Looking up controller_virtual_ip in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp' in environment production", > "Debug: Automatically imported tripleo::pacemaker::haproxy_with_vip from tripleo/pacemaker/haproxy_with_vip into production", > "Debug: hiera(): Looking up public_virtual_ip in JSON backend", > "Debug: hiera(): Looking up network_virtual_ips in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/config.pp' in 
environment production", > "Debug: Automatically imported haproxy::config from haproxy/config into production", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/install.pp' in environment production", > "Debug: Automatically imported haproxy::install from haproxy/install into production", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/service.pp' in environment production", > "Debug: Automatically imported haproxy::service from haproxy/service into production", > "Debug: hiera(): Looking up tripleo.aodh_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_evaluator.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_evaluator.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_evaluator::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_evaluator::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_listener.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_listener.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_listener::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_listener::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_notifier.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_notifier.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_notifier::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_notifier::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ca_certs.haproxy_endpoints in JSON backend", > "Debug: hiera(): 
Looking up tripleo.ca_certs.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ca_certs::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ca_certs::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_central.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_central.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_central::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_central::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_notification.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_notification.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_notification::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_notification::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mgr.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mgr.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mgr::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mgr::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mon.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mon.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mon::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mon::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.certmonger_user.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.certmonger_user.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::certmonger_user::haproxy_endpoints in JSON backend", > 
"Debug: hiera(): Looking up tripleo::certmonger_user::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_backup.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_backup.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_backup::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_backup::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_scheduler.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_scheduler.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_scheduler::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_scheduler::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_volume.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_volume.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_volume::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_volume::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.clustercheck.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.clustercheck.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::clustercheck::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::clustercheck::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.docker.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up 
tripleo.docker.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::docker::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::docker::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_registry_disabled.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_registry_disabled.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_registry_disabled::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_registry_disabled::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_metricd.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_metricd.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_metricd::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_metricd::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_statsd.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_statsd.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_statsd::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up 
tripleo::gnocchi_statsd::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.haproxy.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.haproxy.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cloudwatch_disabled.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cloudwatch_disabled.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cloudwatch_disabled::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cloudwatch_disabled::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cfn.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cfn.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cfn::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cfn::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_engine.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_engine.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_engine::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_engine::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.horizon.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.horizon.haproxy_userlists in JSON 
backend", > "Debug: hiera(): Looking up tripleo::horizon::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::horizon::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.iscsid.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.iscsid.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::iscsid::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::iscsid::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.kernel.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.kernel.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::kernel::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::kernel::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.keystone.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.keystone.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::keystone::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::keystone::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.memcached.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.memcached.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::memcached::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::memcached::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.mongodb_disabled.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.mongodb_disabled.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::mongodb_disabled::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::mongodb_disabled::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up 
tripleo.mysql.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql_client.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql_client.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql_client::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql_client::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_plugin_ml2.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_plugin_ml2.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_plugin_ml2::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_plugin_ml2::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_dhcp.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_dhcp.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_dhcp::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_dhcp::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_l3.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_l3.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_l3::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_l3::haproxy_userlists in JSON backend", > "Debug: 
hiera(): Looking up tripleo.neutron_metadata.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_metadata.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_metadata::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_metadata::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_ovs_agent.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_ovs_agent.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_ovs_agent::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_ovs_agent::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_conductor.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_conductor.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_conductor::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_conductor::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_consoleauth.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_consoleauth.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_consoleauth::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_consoleauth::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_metadata.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_metadata.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up 
tripleo::nova_metadata::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_metadata::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_placement.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_placement.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_placement::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_placement::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_scheduler.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_scheduler.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_scheduler::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_scheduler::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_vnc_proxy.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_vnc_proxy.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_vnc_proxy::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_vnc_proxy::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ntp.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ntp.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ntp::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ntp::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.logrotate_crond.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.logrotate_crond.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::logrotate_crond::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::logrotate_crond::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.pacemaker.haproxy_endpoints in JSON backend", > 
"Debug: hiera(): Looking up tripleo.pacemaker.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::pacemaker::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::pacemaker::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.panko_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.panko_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::panko_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::panko_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_rpc.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_rpc.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_rpc::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_rpc::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_notify.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_notify.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_notify::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_notify::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.redis.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.redis.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::redis::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::redis::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up 
tripleo::sahara_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_engine.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_engine.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_engine::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_engine::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.snmp.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.snmp.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::snmp::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::snmp::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.sshd.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.sshd.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::sshd::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::sshd::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_proxy.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_proxy.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_proxy::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_proxy::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_ringbuilder.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_ringbuilder.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_ringbuilder::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_ringbuilder::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_storage.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_storage.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up 
tripleo::swift_storage::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_storage::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.timezone.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.timezone.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::timezone::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::timezone::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_firewall.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_firewall.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_firewall::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_firewall::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_packages.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_packages.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_packages::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_packages::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.tuned.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.tuned.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::tuned::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::tuned::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_client.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_client.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_client::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_client::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_compute.haproxy_endpoints in JSON backend", > 
"Debug: hiera(): Looking up tripleo.ceilometer_agent_compute.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_compute::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_compute::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_compute.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_compute.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_compute::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_compute::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_libvirt.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_libvirt.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_libvirt::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_libvirt::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_migration_target.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_migration_target.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_migration_target::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_migration_target::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_osd.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_osd.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_osd::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_osd::haproxy_userlists in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/backend.pp' in environment production", > "Debug: Automatically imported haproxy::backend from haproxy/backend into production", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/globals.pp' in 
environment production", > "Debug: Automatically imported haproxy::globals from haproxy/globals into production", > "Debug: hiera(): Looking up haproxy::globals::sort_options_alphabetic in JSON backend", > "Debug: Scope(Haproxy::Listen[haproxy.stats]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Listen[haproxy.stats]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.07 seconds", > "Debug: Scope(Haproxy::Listen[haproxy.stats]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_mode.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/fragments/_mode.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_mode.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_mode.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Listen[haproxy.stats]): Retrieving template haproxy/fragments/_options.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_options.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/fragments/_options.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_options.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_options.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated 
template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.07 seconds", > "Debug: importing '/etc/puppet/modules/concat/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/concat/manifests/fragment.pp' in environment production", > "Debug: Automatically imported concat::fragment from concat/fragment into production", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_admin::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_public::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for listen_options", > 
"Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::neutron::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::cinder::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for member_options", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::sahara::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for manage_firewall", > "Debug: hiera(): Looking up 
tripleo::haproxy::glance_api::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::nova_osapi::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::nova_placement::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for manage_firewall", > "Debug: hiera(): Looking up 
tripleo::haproxy::nova_metadata::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for member_options", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::nova_novncproxy::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::aodh::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::panko::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding 
default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::gnocchi::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::swift_proxy_server::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::heat_api::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for use_internal_certificates", > 
"Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::heat_cfn::options in JSON backend", > "Debug: Scope(Haproxy::Listen[horizon]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[horizon]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.01 seconds", > "Debug: Scope(Haproxy::Listen[horizon]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[horizon]): Retrieving template haproxy/fragments/_options.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.02 seconds", > "Debug: Scope(Haproxy::Balancermember[horizon_172.17.1.12_controller-0.internalapi.localdomain]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb in 0.07 seconds", > "Debug: Scope(Haproxy::Listen[mysql]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[mysql]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Listen[mysql]): Retrieving template 
haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[mysql]): Retrieving template haproxy/fragments/_options.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Balancermember[mysql-backup]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb in 0.00 seconds", > "Debug: hiera(): Looking up tripleo.aodh_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_evaluator.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_evaluator::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_listener.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_listener::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_notifier.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_notifier::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ca_certs.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ca_certs::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_central.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_central::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_notification.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_notification::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mgr.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mgr::firewall_rules 
in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mon.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mon::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.certmonger_user.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::certmonger_user::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_backup.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_backup::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_scheduler.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_scheduler::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_volume.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_volume::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.clustercheck.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::clustercheck::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.docker.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::docker::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_registry_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_registry_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_metricd.firewall_rules in JSON backend", > "Debug: 
hiera(): Looking up tripleo::gnocchi_metricd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_statsd.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_statsd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.haproxy.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cloudwatch_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cloudwatch_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cfn.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cfn::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_engine.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_engine::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.horizon.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::horizon::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.iscsid.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::iscsid::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.kernel.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::kernel::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.keystone.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::keystone::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.memcached.firewall_rules in JSON backend", > "Debug: hiera(): Looking up memcached_network in JSON backend", > "Debug: hiera(): Looking up internal_api_subnet in JSON backend", > "Debug: hiera(): Looking up 
tripleo::memcached::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.mongodb_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::mongodb_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql_client.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql_client::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_plugin_ml2.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_plugin_ml2::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_dhcp.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_dhcp::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_l3.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_l3::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_metadata.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_metadata::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_ovs_agent.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_ovs_agent::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_conductor.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_conductor::firewall_rules in JSON backend", > "Debug: hiera(): Looking up 
tripleo.nova_consoleauth.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_consoleauth::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_metadata.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_metadata::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_placement.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_placement::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_scheduler.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_scheduler::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_vnc_proxy.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_vnc_proxy::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ntp.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ntp::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.logrotate_crond.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::logrotate_crond::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.pacemaker.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::pacemaker::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.panko_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::panko_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_rpc.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_rpc::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_notify.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_notify::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.redis.firewall_rules in JSON backend", > "Debug: hiera(): Looking up 
tripleo::redis::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_engine.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_engine::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.snmp.firewall_rules in JSON backend", > "Debug: hiera(): Looking up snmpd_network in JSON backend", > "Debug: hiera(): Looking up tripleo::snmp::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.sshd.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::sshd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_proxy.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_proxy::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_ringbuilder.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_ringbuilder::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_storage.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_storage::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.timezone.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::timezone::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_firewall.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_firewall::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_packages.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_packages::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.tuned.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::tuned::firewall_rules in JSON backend", > "Debug: Scope(Haproxy::Listen[redis]): 
Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[redis]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[redis]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[redis]): Retrieving template haproxy/fragments/_options.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.01 seconds", > "Debug: Scope(Haproxy::Balancermember[redis]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: hiera(): Looking up haproxy_docker in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource/ip.pp' in environment production", > "Debug: Automatically imported pacemaker::resource::ip from pacemaker/resource/ip into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/constraint/order.pp' in environment production", > "Debug: Automatically imported pacemaker::constraint::order from pacemaker/constraint/order into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/constraint/colocation.pp' in environment production", > "Debug: Automatically imported pacemaker::constraint::colocation from pacemaker/constraint/colocation into production", > "Debug: Scope(Haproxy::Config[haproxy]): Retrieving template haproxy/haproxy-base.cfg.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy-base.cfg.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/haproxy-base.cfg.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy-base.cfg.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy-base.cfg.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Listen[keystone_admin]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[keystone_admin]): Retrieving template 
haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[keystone_admin]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[keystone_admin]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[keystone_admin]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[keystone_public]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[keystone_public]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[keystone_public]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[keystone_public]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[keystone_public]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[neutron]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[neutron]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[neutron]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[neutron]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[neutron]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[cinder]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[cinder]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[cinder]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[cinder]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[cinder]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[sahara]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[sahara]): Retrieving 
template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[sahara]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[sahara]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[sahara]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[glance_api]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[glance_api]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[glance_api]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[glance_api]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[glance_api]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb in 0.05 seconds", > "Debug: Scope(Haproxy::Listen[nova_osapi]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[nova_osapi]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[nova_osapi]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[nova_osapi]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[nova_osapi]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[nova_placement]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[nova_placement]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[nova_placement]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[nova_placement]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[nova_placement]): 
Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[nova_metadata]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[nova_metadata]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[nova_metadata]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[nova_metadata]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[nova_metadata]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[nova_novncproxy]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[nova_novncproxy]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[nova_novncproxy]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[nova_novncproxy]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[nova_novncproxy]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[aodh]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[aodh]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[aodh]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[aodh]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[aodh]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[panko]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[panko]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[panko]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[panko]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[panko]): 
Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[gnocchi]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[gnocchi]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[gnocchi]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[gnocchi]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[gnocchi]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[swift_proxy_server]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[swift_proxy_server]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[swift_proxy_server]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[swift_proxy_server]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[swift_proxy_server]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[heat_api]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[heat_api]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[heat_api]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[heat_api]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[heat_api]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[heat_cfn]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[heat_cfn]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[heat_cfn]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[heat_cfn]): Retrieving template haproxy/fragments/_options.erb", > "Debug: 
Scope(Haproxy::Balancermember[heat_cfn]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: hiera(): Looking up pacemaker::resource::ip::deep_compare in JSON backend", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-192.168.24.12-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-192.168.24.12-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-10.0.0.110-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-10.0.0.110-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-172.17.1.16-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-172.17.1.16-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-172.17.1.15-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-172.17.1.15-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-172.17.3.18-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-172.17.3.18-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-172.17.4.11-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-172.17.4.11-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-192.168.24.12] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-10.0.0.110] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] 
to Pcmk_resource[ip-172.17.1.16] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-172.17.3.18] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-172.17.4.11] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-haproxy-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[pcsd] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[corosync] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[pacemaker] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[firewalld] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[iptables] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[ip6tables] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 
'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to 
Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: 
Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding 
relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to 
Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from 
Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to 
Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] 
to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to 
Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[124 snmp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to 
Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 
keystone_public_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv6] to 
Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to 
Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding 
relationship from Firewall[100 swift_proxy_server_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to 
Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", 
> "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding 
relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding 
relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[124 snmp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: 
Adding relationship from Firewall[100 cinder_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 
gnocchi_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 
'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 
memcached ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding 
relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 
pacemaker udp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[124 snmp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from 
Firewall[999 drop all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 
'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] 
with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 
'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from 
Firewall[112 glance_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan 
networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[124 snmp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Haproxy::Listen[haproxy.stats] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[horizon] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[mysql] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[redis] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[keystone_admin] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[keystone_public] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[neutron] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[cinder] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[sahara] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[glance_api] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[nova_osapi] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[nova_placement] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[nova_metadata] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[nova_novncproxy] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[aodh] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[panko] to 
Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[gnocchi] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[swift_proxy_server] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[heat_api] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[heat_cfn] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[horizon_172.17.1.12_controller-0.internalapi.localdomain] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[mysql-backup] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[redis] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[keystone_admin] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[keystone_public] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[neutron] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[cinder] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[sahara] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[glance_api] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[nova_osapi] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[nova_placement] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[nova_metadata] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[nova_novncproxy] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from 
Haproxy::Balancermember[aodh] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[panko] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[gnocchi] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[swift_proxy_server] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[heat_api] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[heat_cfn] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Anchor[haproxy::haproxy::begin] to Haproxy::Install[haproxy] with 'before'", > "Debug: Adding relationship from Haproxy::Install[haproxy] to Haproxy::Config[haproxy] with 'before'", > "Debug: Adding relationship from Haproxy::Config[haproxy] to Haproxy::Service[haproxy] with 'notify'", > "Debug: Adding relationship from Haproxy::Service[haproxy] to Anchor[haproxy::haproxy::end] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[control_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[control_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[control_vip-then-haproxy] to Pacemaker::Constraint::Colocation[control_vip-with-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[public_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[public_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[public_vip-then-haproxy] to Pacemaker::Constraint::Colocation[public_vip-with-haproxy] with 'before'", > "Debug: Adding relationship from 
Pacemaker::Resource::Ip[redis_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[redis_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[redis_vip-then-haproxy] to Pacemaker::Constraint::Colocation[redis_vip-with-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[internal_api_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[internal_api_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[internal_api_vip-then-haproxy] to Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[storage_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[storage_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[storage_vip-then-haproxy] to Pacemaker::Constraint::Colocation[storage_vip-with-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[storage_mgmt_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy] to Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy] with 'before'", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 5.00 seconds", > "Debug: /Firewall[000 accept related established rules ipv4]: 
[validate]", > "Debug: /Firewall[000 accept related established rules ipv6]: [validate]", > "Debug: /Firewall[001 accept all icmp ipv4]: [validate]", > "Debug: /Firewall[001 accept all icmp ipv6]: [validate]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: [validate]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: [validate]", > "Debug: /Firewall[003 accept ssh ipv4]: [validate]", > "Debug: /Firewall[003 accept ssh ipv6]: [validate]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: [validate]", > "Debug: /Firewall[998 log all ipv4]: [validate]", > "Debug: /Firewall[998 log all ipv6]: [validate]", > "Debug: /Firewall[999 drop all ipv4]: [validate]", > "Debug: /Firewall[999 drop all ipv6]: [validate]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 redis_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 redis_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: [validate]", > 
"Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 panko_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 panko_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: [validate]", > 
"Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[128 aodh-api ipv4]: [validate]", > "Debug: /Firewall[128 aodh-api ipv6]: [validate]", > "Debug: /Firewall[113 ceph_mgr ipv4]: [validate]", > "Debug: /Firewall[113 ceph_mgr ipv6]: [validate]", > "Debug: /Firewall[110 ceph_mon ipv4]: [validate]", > "Debug: /Firewall[110 ceph_mon ipv6]: [validate]", > "Debug: /Firewall[119 cinder ipv4]: [validate]", > "Debug: /Firewall[119 cinder ipv6]: [validate]", > "Debug: /Firewall[120 iscsi initiator ipv4]: [validate]", > "Debug: /Firewall[120 iscsi initiator ipv6]: [validate]", > "Debug: /Firewall[112 glance_api ipv4]: [validate]", > "Debug: /Firewall[112 glance_api ipv6]: [validate]", > "Debug: /Firewall[129 gnocchi-api ipv4]: [validate]", > "Debug: /Firewall[129 gnocchi-api ipv6]: [validate]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: [validate]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: [validate]", > "Debug: /Firewall[107 haproxy stats ipv4]: [validate]", > "Debug: /Firewall[107 haproxy stats ipv6]: [validate]", > "Debug: /Firewall[125 heat_api ipv4]: [validate]", > "Debug: /Firewall[125 heat_api ipv6]: [validate]", > "Debug: /Firewall[125 heat_cfn ipv4]: 
[validate]", > "Debug: /Firewall[125 heat_cfn ipv6]: [validate]", > "Debug: /Firewall[127 horizon ipv4]: [validate]", > "Debug: /Firewall[127 horizon ipv6]: [validate]", > "Debug: /Firewall[111 keystone ipv4]: [validate]", > "Debug: /Firewall[111 keystone ipv6]: [validate]", > "Debug: /Firewall[121 memcached ipv4]: [validate]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: [validate]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: [validate]", > "Debug: /Firewall[114 neutron api ipv4]: [validate]", > "Debug: /Firewall[114 neutron api ipv6]: [validate]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: [validate]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: [validate]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: [validate]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: [validate]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: [validate]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: [validate]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: [validate]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: [validate]", > "Debug: /Firewall[136 neutron gre networks ipv4]: [validate]", > "Debug: /Firewall[136 neutron gre networks ipv6]: [validate]", > "Debug: /Firewall[113 nova_api ipv4]: [validate]", > "Debug: /Firewall[113 nova_api ipv6]: [validate]", > "Debug: /Firewall[138 nova_placement ipv4]: [validate]", > "Debug: /Firewall[138 nova_placement ipv6]: [validate]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: [validate]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: [validate]", > "Debug: /Firewall[105 ntp ipv4]: [validate]", > "Debug: /Firewall[105 ntp ipv6]: [validate]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: [validate]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: [validate]", > "Debug: /Firewall[131 pacemaker udp ipv4]: [validate]", > "Debug: /Firewall[131 pacemaker udp ipv6]: [validate]", > "Debug: /Firewall[140 panko-api ipv4]: [validate]", > "Debug: /Firewall[140 panko-api ipv6]: [validate]", > "Debug: 
/Firewall[109 rabbitmq-bundle ipv4]: [validate]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: [validate]", > "Debug: /Firewall[108 redis-bundle ipv4]: [validate]", > "Debug: /Firewall[108 redis-bundle ipv6]: [validate]", > "Debug: /Firewall[132 sahara ipv4]: [validate]", > "Debug: /Firewall[132 sahara ipv6]: [validate]", > "Debug: /Firewall[124 snmp ipv4]: [validate]", > "Debug: /Firewall[122 swift proxy ipv4]: [validate]", > "Debug: /Firewall[122 swift proxy ipv6]: [validate]", > "Debug: /Firewall[123 swift storage ipv4]: [validate]", > "Debug: /Firewall[123 swift storage ipv6]: [validate]", > "Info: Applying configuration version '1529921605'", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-192.168.24.12-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-192.168.24.12-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-10.0.0.110-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-10.0.0.110-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-172.17.1.16-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-172.17.1.16-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-172.17.1.15-haproxy-bundle]", > 
"Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-172.17.1.15-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-172.17.3.18-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-172.17.3.18-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-172.17.4.11-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-172.17.4.11-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-192.168.24.12]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-10.0.0.110]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-172.17.1.16]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-172.17.1.15]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-172.17.3.18]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-172.17.4.11]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-haproxy-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Haproxy::Stats/Haproxy::Listen[haproxy.stats]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy::Horizon_endpoint/Haproxy::Listen[horizon]/notify: subscribes to Exec[haproxy-reload]", > "Debug: 
/Stage[main]/Tripleo::Haproxy::Horizon_endpoint/Haproxy::Balancermember[horizon_172.17.1.12_controller-0.internalapi.localdomain]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Haproxy::Listen[mysql]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Haproxy::Balancermember[mysql-backup]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/require: subscribes to Package[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Package[iptables-services]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/require: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/subscribe: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/before: subscribes to Service[ip6tables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Haproxy::Listen[redis]/notify: subscribes to Exec[haproxy-reload]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Haproxy::Balancermember[redis]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Haproxy/Exec[haproxy-reload]/subscribe: subscribes to Class[Haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Property[haproxy-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[control_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[public_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[redis_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[storage_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/notify: subscribes to Haproxy::Service[haproxy]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Install[haproxy]/before: subscribes to Haproxy::Config[haproxy]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Service[haproxy]/before: subscribes to Anchor[haproxy::haproxy::end]", > "Debug: 
/Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::begin]/before: subscribes to Haproxy::Install[haproxy]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Haproxy::Listen[keystone_admin]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Haproxy::Balancermember[keystone_admin]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Haproxy::Listen[keystone_public]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Haproxy::Balancermember[keystone_public]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Haproxy::Listen[neutron]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Haproxy::Balancermember[neutron]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Haproxy::Listen[cinder]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Haproxy::Balancermember[cinder]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Haproxy::Listen[sahara]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Haproxy::Balancermember[sahara]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Haproxy::Listen[glance_api]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Haproxy::Balancermember[glance_api]/notify: subscribes to Exec[haproxy-reload]", > 
"Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Haproxy::Listen[nova_osapi]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Haproxy::Balancermember[nova_osapi]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Haproxy::Listen[nova_placement]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Haproxy::Balancermember[nova_placement]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Haproxy::Listen[nova_metadata]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Haproxy::Balancermember[nova_metadata]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Haproxy::Listen[nova_novncproxy]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Haproxy::Balancermember[nova_novncproxy]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Haproxy::Listen[aodh]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Haproxy::Balancermember[aodh]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Haproxy::Listen[panko]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Haproxy::Balancermember[panko]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Haproxy::Listen[gnocchi]/notify: subscribes to 
Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Haproxy::Balancermember[gnocchi]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Haproxy::Listen[swift_proxy_server]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Haproxy::Balancermember[swift_proxy_server]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Haproxy::Listen[heat_api]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Haproxy::Balancermember[heat_api]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Haproxy::Listen[heat_cfn]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Haproxy::Balancermember[heat_cfn]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes 
to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > 
"Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log 
all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Resource::Ip[control_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Order[control_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[control_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Resource::Ip[public_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Order[public_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[public_vip-with-haproxy]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Resource::Ip[redis_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Order[redis_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Resource::Ip[internal_api_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Resource::Ip[storage_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Order[storage_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Resource::Ip[storage_mgmt_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]/before: subscribes to 
Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy 
ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 
keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 
neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 
cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", 
> "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl 
ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 
nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 
nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: 
subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: 
subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 
panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy 
ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 
heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 
ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 
glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", 
> "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy 
stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached 
ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > 
"Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre 
networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 
pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 
rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 
sahara ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp 
ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/Concat_file[/etc/haproxy/haproxy.cfg]/before: subscribes to File[/etc/haproxy/haproxy.cfg]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[001 accept all 
icmp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[002 accept all to lo 
interface ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Package[iptables]", > 
"Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[998 log all ipv4]: Adding autobefore relationship with 
File[/etc/sysconfig/iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[998 log all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[999 drop all ipv6]: 
Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: 
/Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 
keystone_admin_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 
keystone_public_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > 
"Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with 
Package[iptables-services]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autobefore relationship with 
File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with 
Service[firewalld]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: 
/Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > 
"Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding 
autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 
nova_osapi_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: 
Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire 
relationship with Service[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autobefore 
relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with 
Service[firewalld]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding 
autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > 
"Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 panko_haproxy 
ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire 
relationship with Service[ip6tables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with 
Package[iptables-services]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 
swift_proxy_server_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: 
Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with 
Service[iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy 
ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: 
Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with 
Package[iptables-services]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[110 ceph_mon 
ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: 
/Firewall[119 cinder ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: 
/Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: 
/Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autobefore relationship with 
File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: 
Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with 
Service[firewalld]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding 
autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: 
/Firewall[121 memcached ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding 
autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: 
/Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autobefore relationship with 
File[/etc/sysconfig/iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship 
with Service[firewalld]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[118 neutron 
vxlan networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: 
/Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[138 
nova_placement ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autobefore relationship with 
File[/etc/sysconfig/iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship 
with Service[ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[131 pacemaker udp 
ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding 
autorequire relationship with Package[iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: 
/Firewall[108 redis-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[132 sahara ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[132 sahara ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[132 sahara ipv6]: Adding 
autorequire relationship with Service[iptables]", > "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[132 sahara ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[132 sahara ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[124 snmp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[124 snmp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[124 snmp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[124 snmp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[124 snmp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[124 snmp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[124 snmp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: 
/Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autobefore relationship with 
File[/etc/sysconfig/ip6tables]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/Concat_file[/etc/haproxy/haproxy.cfg]: Skipping automatic relationship with File[/etc/haproxy/haproxy.cfg]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: Adding autorequire relationship with File[/etc/haproxy]", > "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Pacemaker::Haproxy_bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Pacemaker::Haproxy_bundle]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Base::Haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Haproxy]: Not tagged with file, file_line, concat, 
augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Class[Haproxy::Params]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Haproxy::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Instance[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Instance[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_evaluator]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_evaluator]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_listener]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_listener]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_notifier]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_notifier]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ca_certs]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ca_certs]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_central]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_central]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_notification]: Not tagged with file, 
file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_notification]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_mgr]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_mgr]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_mon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_mon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[certmonger_user]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[certmonger_user]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_api]: Resource is being skipped, unscheduling all events", 
> "Debug: Tripleo::Haproxy::Service_endpoints[cinder_backup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_backup]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_scheduler]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_scheduler]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_volume]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_volume]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[clustercheck]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[clustercheck]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[docker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Tripleo::Haproxy::Service_endpoints[docker]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[glance_registry_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[glance_registry_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_metricd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_metricd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_statsd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_statsd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api_cloudwatch_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api_cloudwatch_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_engine]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_engine]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[horizon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[horizon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[iscsid]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[iscsid]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[kernel]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[kernel]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[keystone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[keystone]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[memcached]: Not tagged 
with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[memcached]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[mongodb_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[mongodb_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[mysql]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[mysql]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[mysql_client]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[mysql_client]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_api]: Resource is being skipped, unscheduling all events", > 
"Debug: Tripleo::Haproxy::Service_endpoints[neutron_plugin_ml2]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_plugin_ml2]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_dhcp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_dhcp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_l3]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_l3]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_ovs_agent]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Tripleo::Haproxy::Service_endpoints[neutron_ovs_agent]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_conductor]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_conductor]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_consoleauth]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_consoleauth]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_scheduler]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_scheduler]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_vnc_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_vnc_proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ntp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ntp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[logrotate_crond]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[logrotate_crond]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[panko_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[panko_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[oslo_messaging_rpc]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[oslo_messaging_rpc]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[oslo_messaging_notify]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[oslo_messaging_notify]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[redis]: Resource is being skipped, unscheduling all events", > "Debug: 
Tripleo::Haproxy::Service_endpoints[sahara_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[sahara_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[sahara_engine]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[sahara_engine]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[snmp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[snmp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[sshd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[sshd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_proxy]: Resource is being 
skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_ringbuilder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_ringbuilder]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_storage]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_storage]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[timezone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[timezone]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[tripleo_firewall]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[tripleo_firewall]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[tripleo_packages]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[tripleo_packages]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[tuned]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[tuned]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_client]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_client]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_compute]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_compute]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_compute]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_compute]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_libvirt]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_libvirt]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_migration_target]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_migration_target]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_osd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_osd]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Haproxy::Stats]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Haproxy::Stats]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[haproxy.stats]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[haproxy.stats]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Haproxy::Horizon_endpoint]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Haproxy::Horizon_endpoint]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[horizon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[horizon]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[horizon_172.17.1.12_controller-0.internalapi.localdomain]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[horizon_172.17.1.12_controller-0.internalapi.localdomain]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[mysql]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[mysql]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[mysql-backup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[mysql-backup]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Firewall]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Firewall]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Firewall::Pre]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Firewall::Pre]: Resource is being skipped, unscheduling all events", > "Debug: Class[Firewall::Params]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Firewall::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Firewall]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Firewall]: Resource is being skipped, unscheduling all events", > "Debug: Class[Firewall::Linux]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Firewall::Linux]: Resource is being 
skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux/Package[iptables]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux/Package[iptables]: Resource is being skipped, unscheduling all events", > "Debug: Class[Firewall::Linux::Redhat]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Firewall::Linux::Redhat]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Package[iptables-services]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Package[iptables-services]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[000 accept related established rules]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[000 accept related established rules]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[001 accept all icmp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[001 accept all icmp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[002 accept all to lo interface]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[002 accept all to lo interface]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[003 accept ssh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[003 accept ssh]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[aodh_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[aodh_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[aodh_evaluator]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[aodh_evaluator]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[aodh_listener]: Not 
tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[aodh_listener]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[aodh_notifier]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[aodh_notifier]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ca_certs]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ca_certs]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ceilometer_agent_central]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ceilometer_agent_central]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ceilometer_agent_notification]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ceilometer_agent_notification]: Resource is 
being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ceph_mgr]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ceph_mgr]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ceph_mon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ceph_mon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[certmonger_user]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[certmonger_user]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[cinder_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[cinder_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[cinder_backup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Tripleo::Firewall::Service_rules[cinder_backup]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[cinder_scheduler]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[cinder_scheduler]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[cinder_volume]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[cinder_volume]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[clustercheck]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[clustercheck]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[docker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[docker]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[glance_registry_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[glance_registry_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_metricd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_metricd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_statsd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_statsd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[heat_api_cloudwatch_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[heat_api_cloudwatch_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[heat_api_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[heat_api_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[heat_engine]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[heat_engine]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[horizon]: Not tagged 
with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[horizon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[iscsid]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[iscsid]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[kernel]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[kernel]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[keystone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[keystone]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[memcached]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[memcached]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[mongodb_disabled]: 
Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[mongodb_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[mysql]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[mysql]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[mysql_client]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[mysql_client]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_plugin_ml2]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_plugin_ml2]: Resource is being skipped, unscheduling all events", > 
"Debug: Tripleo::Firewall::Service_rules[neutron_dhcp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_dhcp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_l3]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_l3]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_ovs_agent]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_ovs_agent]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Tripleo::Firewall::Service_rules[nova_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_conductor]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_conductor]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_consoleauth]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_consoleauth]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_scheduler]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_scheduler]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_vnc_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_vnc_proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ntp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ntp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[logrotate_crond]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[logrotate_crond]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[panko_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[panko_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[oslo_messaging_rpc]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[oslo_messaging_rpc]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[oslo_messaging_notify]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[oslo_messaging_notify]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[redis]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[sahara_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[sahara_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[sahara_engine]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[sahara_engine]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[snmp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[snmp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[sshd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[sshd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[swift_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[swift_proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[swift_ringbuilder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[swift_ringbuilder]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[swift_storage]: Not tagged with 
file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[swift_storage]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[timezone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[timezone]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[tripleo_firewall]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[tripleo_firewall]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[tripleo_packages]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[tripleo_packages]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[tuned]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[tuned]: Resource is being skipped, unscheduling all events", > "Debug: 
Tripleo::Firewall::Rule[100 mysql_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 mysql_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[redis]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[redis]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 redis_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 redis_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[haproxy-role-controller-0]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Property[haproxy-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: 
Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
/Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-8zplcp returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-8zplcp property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::begin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::begin]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Install[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Install[haproxy]: Resource is being 
skipped, unscheduling all events", > "Debug: Class[Haproxy::Globals]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Haproxy::Globals]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-haproxy.stats_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-haproxy.stats_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[keystone_admin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[keystone_admin]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[keystone_admin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[keystone_admin]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 keystone_admin_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 keystone_admin_haproxy]: Resource is being skipped, 
unscheduling all events", > "Debug: Haproxy::Listen[keystone_public]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[keystone_public]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[keystone_public]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[keystone_public]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 keystone_public_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 keystone_public_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[neutron]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[neutron]: Resource is being skipped, 
unscheduling all events", > "Debug: Haproxy::Balancermember[neutron]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[neutron]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 neutron_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 neutron_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[cinder]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[cinder]: Resource is being skipped, unscheduling all events", > "Debug: 
Tripleo::Firewall::Rule[100 cinder_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 cinder_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[sahara]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[sahara]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 sahara_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 sahara_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 
sahara_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 glance_api_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 glance_api_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: 
Haproxy::Listen[nova_osapi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[nova_osapi]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[nova_osapi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[nova_osapi]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_osapi_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_osapi_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: 
Haproxy::Balancermember[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_placement_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_placement_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[nova_metadata]: Resource is being skipped, unscheduling all 
events", > "Debug: Tripleo::Firewall::Rule[100 nova_metadata_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_metadata_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[nova_novncproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[nova_novncproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[nova_novncproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[nova_novncproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 
nova_novncproxy_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[aodh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[aodh]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[aodh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[aodh]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 aodh_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 aodh_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[panko]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[panko]: Resource is being skipped, unscheduling all events", > "Debug: 
Haproxy::Balancermember[panko]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[panko]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 panko_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 panko_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 panko_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 panko_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[gnocchi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[gnocchi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 gnocchi_haproxy]: Not tagged with 
file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 gnocchi_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[swift_proxy_server]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[swift_proxy_server]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[swift_proxy_server]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[swift_proxy_server]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: 
Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 heat_api_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 heat_api_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]: Resource is being skipped, unscheduling all 
events", > "Debug: Haproxy::Listen[heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[heat_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[heat_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 heat_cfn_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 heat_cfn_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-horizon_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-horizon_listen_block]: Resource is being skipped, unscheduling all events", > 
"Debug: Concat::Fragment[haproxy-horizon_balancermember_horizon_172.17.1.12_controller-0.internalapi.localdomain]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-horizon_balancermember_horizon_172.17.1.12_controller-0.internalapi.localdomain]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-mysql_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-mysql_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-mysql_balancermember_mysql-backup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-mysql_balancermember_mysql-backup]: Resource is being skipped, unscheduling all events", > "Debug: Prefetching iptables resources for firewall", > "Debug: Puppet::Type::Firewall::ProviderIptables: [prefetch(resources)]", > "Debug: Puppet::Type::Firewall::ProviderIptables: [instances]", > "Debug: Executing: '/usr/sbin/iptables-save'", > "Debug: Prefetching ip6tables resources for firewall", > "Debug: Puppet::Type::Firewall::ProviderIp6tables: [prefetch(resources)]", > "Debug: Puppet::Type::Firewall::ProviderIp6tables: [instances]", > "Debug: Executing: '/usr/sbin/ip6tables-save'", > "Debug: Class[Tripleo::Firewall::Post]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Firewall::Post]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[998 log all]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[998 log all]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[999 drop all]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[999 drop all]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[128 aodh-api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[128 aodh-api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[113 ceph_mgr]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[113 ceph_mgr]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[110 ceph_mon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[110 ceph_mon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[119 cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[119 cinder]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[120 iscsi initiator]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[120 iscsi initiator]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[112 glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[112 glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[129 gnocchi-api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[129 gnocchi-api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[140 gnocchi-statsd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[140 gnocchi-statsd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[107 haproxy stats]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[107 haproxy stats]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[125 heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[125 heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[125 heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[125 heat_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[127 horizon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[127 horizon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[111 keystone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[111 keystone]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[121 memcached]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[121 memcached]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[104 mysql galera-bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[104 mysql galera-bundle]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[114 neutron api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[114 neutron api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[115 neutron dhcp input]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[115 neutron dhcp input]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[116 neutron dhcp output]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[116 neutron dhcp output]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[106 neutron_l3 vrrp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[106 neutron_l3 vrrp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[118 neutron vxlan networks]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[118 neutron vxlan networks]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[136 neutron gre networks]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[136 neutron gre networks]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[113 nova_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[113 nova_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[138 nova_placement]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[138 nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[137 nova_vnc_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[137 nova_vnc_proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[105 ntp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[105 ntp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[130 pacemaker tcp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[130 pacemaker tcp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[131 pacemaker udp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[131 pacemaker udp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[140 panko-api]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[140 panko-api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[109 rabbitmq-bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[109 rabbitmq-bundle]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[108 redis-bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[108 redis-bundle]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[132 sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[132 sahara]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[124 snmp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[124 snmp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[122 swift proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[122 swift proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[123 swift storage]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[123 swift storage]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): Inserting rule 100 mysql_haproxy ipv4", > "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 3306 -m state --state NEW -j ACCEPT -m comment --comment 100 mysql_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: Executing: '/usr/libexec/iptables/iptables.init save'", > "Debug: /Firewall[100 mysql_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 mysql_haproxy] will propagate my refresh event", > "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): Inserting rule 100 mysql_haproxy ipv6", > "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter 
-p tcp -m multiport --dports 3306 -m state --state NEW -j ACCEPT -m comment --comment 100 mysql_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: Executing: '/usr/libexec/iptables/ip6tables.init save'", > "Debug: /Firewall[100 mysql_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 mysql_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 mysql_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 mysql_haproxy]", > "Debug: Concat::Fragment[haproxy-redis_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-redis_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-redis_balancermember_redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-redis_balancermember_redis]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): Inserting rule 100 redis_haproxy ipv4", > "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 6379 -m state --state NEW -j ACCEPT -m comment 
--comment 100 redis_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 redis_haproxy] will propagate my refresh event", > "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): Inserting rule 100 redis_haproxy ipv6", > "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 6379 -m state --state NEW -j ACCEPT -m comment --comment 100 redis_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 redis_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 redis_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 redis_haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1afp57k returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1afp57k property show | grep haproxy-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | grep haproxy-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib 
/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-e7htqk returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-e7htqk property set --node controller-0 haproxy-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-e7htqk diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-e7htqk.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 haproxy-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Property[haproxy-role-controller-0]/Pcmk_property[property-controller-0-haproxy-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Property[haproxy-role-controller-0]/Pcmk_property[property-controller-0-haproxy-role]: The container Pacemaker::Property[haproxy-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[haproxy-role-controller-0]: Unscheduling all events on Pacemaker::Property[haproxy-role-controller-0]", > "Debug: Pacemaker::Resource::Ip[control_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[control_vip]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Resource::Ip[public_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[public_vip]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Resource::Ip[redis_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[redis_vip]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Resource::Ip[internal_api_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[internal_api_vip]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Resource::Ip[storage_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[storage_vip]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Resource::Ip[storage_mgmt_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[storage_mgmt_vip]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Install[haproxy]/Package[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Install[haproxy]/Package[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Config[haproxy]: Not 
tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Config[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Concat[/etc/haproxy/haproxy.cfg]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat[/etc/haproxy/haproxy.cfg]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-00-header]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-00-header]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-haproxy-base]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-haproxy-base]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-keystone_admin_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-keystone_admin_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-keystone_admin_balancermember_keystone_admin]: Not 
tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-keystone_admin_balancermember_keystone_admin]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): Inserting rule 100 keystone_admin_haproxy ipv4", > "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 35357 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_admin_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 keystone_admin_haproxy] will propagate my refresh event", > "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): Inserting rule 100 keystone_admin_haproxy ipv6", > "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 35357 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_admin_haproxy ipv6'", > "Notice: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 keystone_admin_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 keystone_admin_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 keystone_admin_haproxy]", > "Debug: Concat::Fragment[haproxy-keystone_public_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-keystone_public_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-keystone_public_balancermember_keystone_public]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-keystone_public_balancermember_keystone_public]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): Inserting rule 100 keystone_public_haproxy ipv4", > "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 5000 -m state --state NEW 
-j ACCEPT -m comment --comment 100 keystone_public_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 keystone_public_haproxy] will propagate my refresh event", > "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): Inserting rule 100 keystone_public_haproxy ipv6", > "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 5000 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_public_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 keystone_public_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 keystone_public_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 keystone_public_haproxy]", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 keystone_public_haproxy_ssl ipv4", > "Debug: Firewall[100 keystone_public_haproxy_ssl 
ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 13000 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_public_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 keystone_public_haproxy_ssl ipv6", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 13000 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_public_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl] will propagate 
my refresh event", > "Info: Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-neutron_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-neutron_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-neutron_balancermember_neutron]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-neutron_balancermember_neutron]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): Inserting rule 100 neutron_haproxy ipv4", > "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 9696 -m state --state NEW -j ACCEPT -m comment --comment 100 neutron_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 neutron_haproxy] will propagate my refresh event", > "Debug: Firewall[100 
neutron_haproxy ipv6](provider=ip6tables): Inserting rule 100 neutron_haproxy ipv6", > "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 10 --wait -t filter -p tcp -m multiport --dports 9696 -m state --state NEW -j ACCEPT -m comment --comment 100 neutron_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 neutron_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 neutron_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 neutron_haproxy]", > "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 neutron_haproxy_ssl ipv4", > "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 10 --wait -t filter -p tcp -m multiport --dports 13696 -m state --state NEW -j ACCEPT -m comment --comment 100 neutron_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: The container 
Tripleo::Firewall::Rule[100 neutron_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 neutron_haproxy_ssl ipv6", > "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 11 --wait -t filter -p tcp -m multiport --dports 13696 -m state --state NEW -j ACCEPT -m comment --comment 100 neutron_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 neutron_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-cinder_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-cinder_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-cinder_balancermember_cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Concat::Fragment[haproxy-cinder_balancermember_cinder]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): Inserting rule 100 cinder_haproxy ipv4", > "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 8776 -m state --state NEW -j ACCEPT -m comment --comment 100 cinder_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 cinder_haproxy] will propagate my refresh event", > "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): Inserting rule 100 cinder_haproxy ipv6", > "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 8776 -m state --state NEW -j ACCEPT -m comment --comment 100 cinder_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 cinder_haproxy] will propagate my refresh 
event", > "Info: Tripleo::Firewall::Rule[100 cinder_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 cinder_haproxy]", > "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 cinder_haproxy_ssl ipv4", > "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 13776 -m state --state NEW -j ACCEPT -m comment --comment 100 cinder_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 cinder_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 cinder_haproxy_ssl ipv6", > "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 13776 -m state --state NEW -j ACCEPT -m comment --comment 100 cinder_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 
cinder_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 cinder_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-sahara_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-sahara_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-sahara_balancermember_sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-sahara_balancermember_sahara]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): Inserting rule 100 sahara_haproxy ipv4", > "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 14 --wait -t filter -p tcp -m multiport --dports 8386 -m state --state NEW -j ACCEPT -m comment --comment 100 sahara_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 
sahara_haproxy] will propagate my refresh event", > "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): Inserting rule 100 sahara_haproxy ipv6", > "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 8386 -m state --state NEW -j ACCEPT -m comment --comment 100 sahara_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 sahara_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 sahara_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 sahara_haproxy]", > "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 sahara_haproxy_ssl ipv4", > "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 13386 -m state --state NEW -j ACCEPT -m comment --comment 100 sahara_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 
sahara_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 sahara_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 sahara_haproxy_ssl ipv6", > "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 13386 -m state --state NEW -j ACCEPT -m comment --comment 100 sahara_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 sahara_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-glance_api_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-glance_api_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-glance_api_balancermember_glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Concat::Fragment[haproxy-glance_api_balancermember_glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): Inserting rule 100 glance_api_haproxy ipv4", > "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 9292 -m state --state NEW -j ACCEPT -m comment --comment 100 glance_api_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 glance_api_haproxy] will propagate my refresh event", > "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): Inserting rule 100 glance_api_haproxy ipv6", > "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 9292 -m state --state NEW -j ACCEPT -m comment --comment 100 glance_api_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 
glance_api_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 glance_api_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 glance_api_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 glance_api_haproxy]", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 glance_api_haproxy_ssl ipv4", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 13292 -m state --state NEW -j ACCEPT -m comment --comment 100 glance_api_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 glance_api_haproxy_ssl ipv6", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 13292 -m state --state NEW -j ACCEPT -m comment --comment 100 glance_api_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/ensure: 
created", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-nova_osapi_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_osapi_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-nova_osapi_balancermember_nova_osapi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_osapi_balancermember_nova_osapi]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): Inserting rule 100 nova_osapi_haproxy ipv4", > "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 8774 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_osapi_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy 
ipv4]/ensure: created", > "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 nova_osapi_haproxy] will propagate my refresh event", > "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): Inserting rule 100 nova_osapi_haproxy ipv6", > "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 8774 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_osapi_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 nova_osapi_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_osapi_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_osapi_haproxy]", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 nova_osapi_haproxy_ssl ipv4", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 13774 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_osapi_haproxy_ssl ipv4'", > "Notice: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 nova_osapi_haproxy_ssl ipv6", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 13774 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_osapi_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-nova_placement_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_placement_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-nova_placement_balancermember_nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_placement_balancermember_nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): Inserting rule 100 nova_placement_haproxy ipv4", > "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 8778 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_placement_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 nova_placement_haproxy] will propagate my refresh event", > "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): Inserting rule 100 nova_placement_haproxy ipv6", > "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: 
Executing: '/usr/sbin/ip6tables -I INPUT 18 --wait -t filter -p tcp -m multiport --dports 8778 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_placement_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 nova_placement_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_placement_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_placement_haproxy]", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 nova_placement_haproxy_ssl ipv4", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 18 --wait -t filter -p tcp -m multiport --dports 13778 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_placement_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 nova_placement_haproxy_ssl 
ipv6](provider=ip6tables): Inserting rule 100 nova_placement_haproxy_ssl ipv6", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 19 --wait -t filter -p tcp -m multiport --dports 13778 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_placement_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-nova_metadata_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_metadata_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-nova_metadata_balancermember_nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Concat::Fragment[haproxy-nova_metadata_balancermember_nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): Inserting rule 100 nova_metadata_haproxy ipv4", > "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 8775 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_metadata_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 nova_metadata_haproxy] will propagate my refresh event", > "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): Inserting rule 100 nova_metadata_haproxy ipv6", > "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 8775 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_metadata_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_metadata_haproxy 
ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 nova_metadata_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_metadata_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_metadata_haproxy]", > "Debug: Concat::Fragment[haproxy-nova_novncproxy_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_novncproxy_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-nova_novncproxy_balancermember_nova_novncproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_novncproxy_balancermember_nova_novncproxy]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): Inserting rule 100 nova_novncproxy_haproxy ipv4", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 6080 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_novncproxy_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 nova_novncproxy_haproxy 
ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy] will propagate my refresh event", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): Inserting rule 100 nova_novncproxy_haproxy ipv6", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 6080 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_novncproxy_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 nova_novncproxy_haproxy_ssl ipv4", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 13080 -m state --state NEW -j ACCEPT -m comment --comment 100 
nova_novncproxy_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 nova_novncproxy_haproxy_ssl ipv6", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 18 --wait -t filter -p tcp -m multiport --dports 13080 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_novncproxy_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-aodh_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-aodh_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-aodh_balancermember_aodh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-aodh_balancermember_aodh]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): Inserting rule 100 aodh_haproxy ipv4", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 8042 -m state --state NEW -j ACCEPT -m comment --comment 100 aodh_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 aodh_haproxy] will propagate my refresh event", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): Inserting rule 100 aodh_haproxy ipv6", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 
8042 -m state --state NEW -j ACCEPT -m comment --comment 100 aodh_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 aodh_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 aodh_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 aodh_haproxy]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 aodh_haproxy_ssl ipv4", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 13042 -m state --state NEW -j ACCEPT -m comment --comment 100 aodh_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 aodh_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 aodh_haproxy_ssl ipv6", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t 
filter -p tcp -m multiport --dports 13042 -m state --state NEW -j ACCEPT -m comment --comment 100 aodh_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 aodh_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-panko_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-panko_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-panko_balancermember_panko]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-panko_balancermember_panko]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): Inserting rule 100 panko_haproxy ipv4", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 24 --wait -t filter -p tcp -m multiport --dports 8977 -m state --state NEW -j ACCEPT -m comment 
--comment 100 panko_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 panko_haproxy] will propagate my refresh event", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): Inserting rule 100 panko_haproxy ipv6", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 8977 -m state --state NEW -j ACCEPT -m comment --comment 100 panko_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 panko_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 panko_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 panko_haproxy]", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 panko_haproxy_ssl ipv4", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 13977 -m state --state NEW -j 
ACCEPT -m comment --comment 100 panko_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 panko_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 panko_haproxy_ssl ipv6", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 26 --wait -t filter -p tcp -m multiport --dports 13977 -m state --state NEW -j ACCEPT -m comment --comment 100 panko_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 panko_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 panko_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 panko_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-gnocchi_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Concat::Fragment[haproxy-gnocchi_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-gnocchi_balancermember_gnocchi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-gnocchi_balancermember_gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): Inserting rule 100 gnocchi_haproxy ipv4", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 11 --wait -t filter -p tcp -m multiport --dports 8041 -m state --state NEW -j ACCEPT -m comment --comment 100 gnocchi_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 gnocchi_haproxy] will propagate my refresh event", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): Inserting rule 100 gnocchi_haproxy ipv6", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 8041 -m state --state NEW -j ACCEPT -m comment --comment 100 gnocchi_haproxy ipv6'", > "Notice: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 gnocchi_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 gnocchi_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 gnocchi_haproxy]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 gnocchi_haproxy_ssl ipv4", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 13041 -m state --state NEW -j ACCEPT -m comment --comment 100 gnocchi_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 gnocchi_haproxy_ssl ipv6", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 13 --wait -t filter -p tcp -m 
multiport --dports 13041 -m state --state NEW -j ACCEPT -m comment --comment 100 gnocchi_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-swift_proxy_server_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-swift_proxy_server_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-swift_proxy_server_balancermember_swift_proxy_server]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-swift_proxy_server_balancermember_swift_proxy_server]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): Inserting rule 100 swift_proxy_server_haproxy ipv4", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > 
"Debug: Executing: '/usr/sbin/iptables -I INPUT 31 --wait -t filter -p tcp -m multiport --dports 8080 -m state --state NEW -j ACCEPT -m comment --comment 100 swift_proxy_server_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy] will propagate my refresh event", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): Inserting rule 100 swift_proxy_server_haproxy ipv6", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 32 --wait -t filter -p tcp -m multiport --dports 8080 -m state --state NEW -j ACCEPT -m comment --comment 100 swift_proxy_server_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]", > 
"Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 swift_proxy_server_haproxy_ssl ipv4", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 32 --wait -t filter -p tcp -m multiport --dports 13808 -m state --state NEW -j ACCEPT -m comment --comment 100 swift_proxy_server_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 swift_proxy_server_haproxy_ssl ipv6", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 33 --wait -t filter -p tcp -m multiport --dports 13808 -m state --state NEW -j ACCEPT -m comment --comment 100 swift_proxy_server_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): [flush]", 
> "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-heat_api_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-heat_api_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-heat_api_balancermember_heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-heat_api_balancermember_heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): Inserting rule 100 heat_api_haproxy ipv4", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 13 --wait -t filter -p tcp -m multiport --dports 8004 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_api_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 heat_api_haproxy 
ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 heat_api_haproxy] will propagate my refresh event", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): Inserting rule 100 heat_api_haproxy ipv6", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 14 --wait -t filter -p tcp -m multiport --dports 8004 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_api_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 heat_api_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 heat_api_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 heat_api_haproxy]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 heat_api_haproxy_ssl ipv4", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 14 --wait -t filter -p tcp -m multiport --dports 13004 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_api_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 
heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 heat_api_haproxy_ssl ipv6", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 13004 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_api_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-heat_cfn_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-heat_cfn_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: 
Concat::Fragment[haproxy-heat_cfn_balancermember_heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-heat_cfn_balancermember_heat_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): Inserting rule 100 heat_cfn_haproxy ipv4", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 8000 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_cfn_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 heat_cfn_haproxy] will propagate my refresh event", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): Inserting rule 100 heat_cfn_haproxy ipv6", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 8000 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_cfn_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 
heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 heat_cfn_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 heat_cfn_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 heat_cfn_haproxy]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 heat_cfn_haproxy_ssl ipv4", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 13005 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_cfn_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 heat_cfn_haproxy_ssl ipv6", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 13005 -m state --state NEW -j ACCEPT -m comment 
--comment 100 heat_cfn_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v4_rules_cleanup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v4_rules_cleanup]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v6_rules_cleanup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v6_rules_cleanup]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
/Stage[main]/Tripleo::Firewall/Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1yjvcao returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1yjvcao constraint list | grep location-ip-192.168.24.12 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1fihzjf returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1fihzjf resource show ip-192.168.24.12 > /dev/null 2>&1", > "Debug: Exists: resource ip-192.168.24.12 exists 1 location exists 1 resource deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1oq1anq returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1oq1anq resource create ip-192.168.24.12 IPaddr2 ip=192.168.24.12 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1oq1anq diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1oq1anq.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-192.168.24.12 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: 
location_rule_create: constraint location ip-192.168.24.12 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-11526rm returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-11526rm constraint location ip-192.168.24.12 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-11526rm diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-11526rm.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-l3wmfz returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-l3wmfz resource enable ip-192.168.24.12", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-l3wmfz diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-l3wmfz.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Resource::Ip[control_vip]/Pcmk_resource[ip-192.168.24.12]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Resource::Ip[control_vip]/Pcmk_resource[ip-192.168.24.12]: The container Pacemaker::Resource::Ip[control_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[control_vip]: Unscheduling all events on Pacemaker::Resource::Ip[control_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-68ec8b returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-68ec8b constraint list | grep location-ip-10.0.0.110 > /dev/null 2>&1", 
> "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-xf19fb returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-xf19fb resource show ip-10.0.0.110 > /dev/null 2>&1", > "Debug: Exists: resource ip-10.0.0.110 exists 1 location exists 1 resource deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1jfxsgy returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1jfxsgy resource create ip-10.0.0.110 IPaddr2 ip=10.0.0.110 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1jfxsgy diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1jfxsgy.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-10.0.0.110 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-10.0.0.110 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-nuad4a returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-nuad4a constraint location ip-10.0.0.110 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-nuad4a diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-nuad4a.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1ekdywy returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1ekdywy resource enable ip-10.0.0.110", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push 
/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1ekdywy diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1ekdywy.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Resource::Ip[public_vip]/Pcmk_resource[ip-10.0.0.110]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Resource::Ip[public_vip]/Pcmk_resource[ip-10.0.0.110]: The container Pacemaker::Resource::Ip[public_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[public_vip]: Unscheduling all events on Pacemaker::Resource::Ip[public_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1huyxsz returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1huyxsz constraint list | grep location-ip-172.17.1.16 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-nceeyk returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-nceeyk resource show ip-172.17.1.16 > /dev/null 2>&1", > "Debug: Exists: resource ip-172.17.1.16 exists 1 location exists 1 resource deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-rq1seh returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-rq1seh resource create ip-172.17.1.16 IPaddr2 ip=172.17.1.16 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-rq1seh diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-rq1seh.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location 
ip-172.17.1.16 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-172.17.1.16 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1itqpqz returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1itqpqz constraint location ip-172.17.1.16 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1itqpqz diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1itqpqz.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-155hmnp returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-155hmnp resource enable ip-172.17.1.16", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-155hmnp diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-155hmnp.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Resource::Ip[redis_vip]/Pcmk_resource[ip-172.17.1.16]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Resource::Ip[redis_vip]/Pcmk_resource[ip-172.17.1.16]: The container Pacemaker::Resource::Ip[redis_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[redis_vip]: Unscheduling all events on Pacemaker::Resource::Ip[redis_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1irh6co returned ", > "Debug: /usr/sbin/pcs -f 
/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1irh6co constraint list | grep location-ip-172.17.1.15 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-psi9lh returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-psi9lh resource show ip-172.17.1.15 > /dev/null 2>&1", > "Debug: Exists: resource ip-172.17.1.15 exists 1 location exists 1 resource deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-91svhd returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-91svhd resource create ip-172.17.1.15 IPaddr2 ip=172.17.1.15 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-91svhd diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-91svhd.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-172.17.1.15 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-172.17.1.15 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-augeb9 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-augeb9 constraint location ip-172.17.1.15 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-augeb9 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-augeb9.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-mp9f76 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f 
/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-mp9f76 resource enable ip-172.17.1.15", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-mp9f76 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-mp9f76.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Resource::Ip[internal_api_vip]/Pcmk_resource[ip-172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Resource::Ip[internal_api_vip]/Pcmk_resource[ip-172.17.1.15]: The container Pacemaker::Resource::Ip[internal_api_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[internal_api_vip]: Unscheduling all events on Pacemaker::Resource::Ip[internal_api_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-14ow56r returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-14ow56r constraint list | grep location-ip-172.17.3.18 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-gbcvu5 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-gbcvu5 resource show ip-172.17.3.18 > /dev/null 2>&1", > "Debug: Exists: resource ip-172.17.3.18 exists 1 location exists 1 resource deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-x5yhbg returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-x5yhbg resource create ip-172.17.3.18 IPaddr2 ip=172.17.3.18 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push 
/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-x5yhbg diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-x5yhbg.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-172.17.3.18 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-172.17.3.18 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-12p5yij returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-12p5yij constraint location ip-172.17.3.18 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-12p5yij diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-12p5yij.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1hu11s7 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1hu11s7 resource enable ip-172.17.3.18", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1hu11s7 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1hu11s7.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Resource::Ip[storage_vip]/Pcmk_resource[ip-172.17.3.18]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Resource::Ip[storage_vip]/Pcmk_resource[ip-172.17.3.18]: The container Pacemaker::Resource::Ip[storage_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[storage_vip]: Unscheduling all events on 
Pacemaker::Resource::Ip[storage_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-uq3shz returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-uq3shz constraint list | grep location-ip-172.17.4.11 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-18ocdon returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-18ocdon resource show ip-172.17.4.11 > /dev/null 2>&1", > "Debug: Exists: resource ip-172.17.4.11 exists 1 location exists 1 resource deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-9nry15 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-9nry15 resource create ip-172.17.4.11 IPaddr2 ip=172.17.4.11 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-9nry15 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-9nry15.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-172.17.4.11 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-172.17.4.11 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1dtktub returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1dtktub constraint location ip-172.17.4.11 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1dtktub diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1dtktub.orig returned 0 -> CIB 
updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-l61d2n returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-l61d2n resource enable ip-172.17.4.11", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-l61d2n diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-l61d2n.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Resource::Ip[storage_mgmt_vip]/Pcmk_resource[ip-172.17.4.11]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Resource::Ip[storage_mgmt_vip]/Pcmk_resource[ip-172.17.4.11]: The container Pacemaker::Resource::Ip[storage_mgmt_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[storage_mgmt_vip]: Unscheduling all events on Pacemaker::Resource::Ip[storage_mgmt_vip]", > "Debug: Pacemaker::Resource::Bundle[haproxy-bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Bundle[haproxy-bundle]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1nx34lr returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1nx34lr constraint list | grep location-haproxy-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-qjnssp returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-qjnssp 
resource show haproxy-bundle > /dev/null 2>&1", > "Debug: Exists: bundle haproxy-bundle exists 1 location exists 1 deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-166y7el returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-166y7el resource bundle create haproxy-bundle container docker image=192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest replicas=1 options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=haproxy-cfg-files source-dir=/var/lib/kolla/config_files/haproxy.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=haproxy-cfg-data source-dir=/var/lib/config-data/puppet-generated/haproxy/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=haproxy-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=haproxy-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=haproxy-var-lib source-dir=/var/lib/haproxy target-dir=/var/lib/haproxy options=rw storage-map id=haproxy-pki-extracted source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted options=ro storage-map id=haproxy-pki-ca-bundle-crt source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro storage-map id=haproxy-pki-ca-bundle-trust-crt source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro storage-map id=haproxy-pki-cert source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem options=ro storage-map id=haproxy-dev-log source-dir=/dev/log target-dir=/dev/log options=rw --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-166y7el 
diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-166y7el.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location haproxy-bundle rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location haproxy-bundle rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1xccs7h returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1xccs7h constraint location haproxy-bundle rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1xccs7h diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1xccs7h.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-44fn3v returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-44fn3v resource enable haproxy-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-44fn3v diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-44fn3v.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/Pcmk_bundle[haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/Pcmk_bundle[haproxy-bundle]: The container Pacemaker::Resource::Bundle[haproxy-bundle] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[haproxy-bundle]: Unscheduling all events on Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: Pacemaker::Constraint::Order[control_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[control_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[public_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[public_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[redis_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[redis_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[storage_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[storage_vip-then-haproxy]: Resource is being skipped, 
unscheduling all events", > "Debug: Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-14ffp5h returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-14ffp5h constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1smkp2t returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1smkp2t constraint order start ip-192.168.24.12 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1smkp2t diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1smkp2t.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Order[control_vip-then-haproxy]/Pcmk_constraint[order-ip-192.168.24.12-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Order[control_vip-then-haproxy]/Pcmk_constraint[order-ip-192.168.24.12-haproxy-bundle]: The container Pacemaker::Constraint::Order[control_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[control_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[control_vip-then-haproxy]", > "Debug: 
Pacemaker::Constraint::Colocation[control_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[control_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-yxyl99 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-yxyl99 constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1t93o5e returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1t93o5e constraint colocation add ip-192.168.24.12 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1t93o5e diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1t93o5e.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Colocation[control_vip-with-haproxy]/Pcmk_constraint[colo-ip-192.168.24.12-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Colocation[control_vip-with-haproxy]/Pcmk_constraint[colo-ip-192.168.24.12-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[control_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[control_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[control_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib 
/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-hn430o returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-hn430o constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-rgubk returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-rgubk constraint order start ip-10.0.0.110 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-rgubk diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-rgubk.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Order[public_vip-then-haproxy]/Pcmk_constraint[order-ip-10.0.0.110-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Order[public_vip-then-haproxy]/Pcmk_constraint[order-ip-10.0.0.110-haproxy-bundle]: The container Pacemaker::Constraint::Order[public_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[public_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[public_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[public_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[public_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1kyj40x returned ", > "Debug: 
/usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1kyj40x constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-138d9x9 returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-138d9x9 constraint colocation add ip-10.0.0.110 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-138d9x9 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-138d9x9.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Colocation[public_vip-with-haproxy]/Pcmk_constraint[colo-ip-10.0.0.110-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Colocation[public_vip-with-haproxy]/Pcmk_constraint[colo-ip-10.0.0.110-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[public_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[public_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[public_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1fkr21u returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1fkr21u constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1bv76qc returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1bv76qc constraint order start ip-172.17.1.16 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push 
/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1bv76qc diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1bv76qc.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Order[redis_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.1.16-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Order[redis_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.1.16-haproxy-bundle]: The container Pacemaker::Constraint::Order[redis_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[redis_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[redis_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-ba1i3q returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-ba1i3q constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-aq6c2p returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-aq6c2p constraint colocation add ip-172.17.1.16 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-aq6c2p 
diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-aq6c2p.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.1.16-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.1.16-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[redis_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-ta5mpn returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-ta5mpn constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-yll8no returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-yll8no constraint order start ip-172.17.1.15 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-yll8no diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-yll8no.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.1.15-haproxy-bundle]/ensure: created", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.1.15-haproxy-bundle]: The container Pacemaker::Constraint::Order[internal_api_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1hj0o7w returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1hj0o7w constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1hwy06u returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1hwy06u constraint colocation add ip-172.17.1.15 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1hwy06u diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1hwy06u.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.1.15-haproxy-bundle]/ensure: created", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.1.15-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1lqmpt3 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1lqmpt3 constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-2wuygu returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-2wuygu constraint order start ip-172.17.3.18 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-2wuygu diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-2wuygu.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Order[storage_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.3.18-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Order[storage_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.3.18-haproxy-bundle]: The container Pacemaker::Constraint::Order[storage_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[storage_vip-then-haproxy]: Unscheduling all events on 
Pacemaker::Constraint::Order[storage_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1seo7md returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1seo7md constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-10268l8 returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-10268l8 constraint colocation add ip-172.17.3.18 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-10268l8 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-10268l8.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.3.18-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.3.18-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[storage_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]: Unscheduling all events on 
Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-ls28n1 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-ls28n1 constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1ag28nc returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1ag28nc constraint order start ip-172.17.4.11 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1ag28nc diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-1ag28nc.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.4.11-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.4.11-haproxy-bundle]: The container Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]: Resource is being 
skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-10qxwfe returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-10qxwfe constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-10sb6sj returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-10sb6sj constraint colocation add ip-172.17.4.11 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180625-8-10sb6sj diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180625-8-10sb6sj.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.4.11-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.4.11-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]", > "Info: Computing checksum on file /etc/haproxy/haproxy.cfg", > "Info: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: Filebucketed /etc/haproxy/haproxy.cfg to puppet with sum 1f337186b0e1ba5ee82760cb437fb810", > "Debug: Executing: '/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg20180625-8-s43fps -c'", > "Debug: 
/Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: Configuration file is valid", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/content: content changed '{md5}1f337186b0e1ba5ee82760cb437fb810' to '{md5}80fa14cff3b060e06166a1de84ae95e8'", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/mode: mode changed '0644' to '0640'", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: The container Concat[/etc/haproxy/haproxy.cfg] will propagate my refresh event", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: The container /etc/haproxy/haproxy.cfg will propagate my refresh event", > "Debug: /etc/haproxy/haproxy.cfg: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /etc/haproxy/haproxy.cfg: Resource is being skipped, unscheduling all events", > "Info: /etc/haproxy/haproxy.cfg: Unscheduling all events on /etc/haproxy/haproxy.cfg", > "Info: Concat[/etc/haproxy/haproxy.cfg]: Unscheduling all events on Concat[/etc/haproxy/haproxy.cfg]", > "Debug: Haproxy::Service[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Service[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::end]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Haproxy/Exec[haproxy-reload]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Profile::Base::Haproxy/Exec[haproxy-reload]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
/Schedule[monthly]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Finishing transaction 24960820", > "Debug: Stored state in 0.10 seconds", > "Notice: Applied catalog in 156.00 seconds", > " Total: 90", > " Success: 90", > " Total: 251", > " Skipped: 36", > " Out of sync: 89", > " Changed: 89", > " Concat file: 0.00", > " Concat fragment: 0.00", > " File: 0.06", > " Pcmk bundle: 10.66", > " Last run: 1529921766", > " Total: 159.59", > " Firewall: 22.21", > " Pcmk constraint: 38.44", > " Pcmk property: 5.02", > " Config retrieval: 5.60", > " Pcmk resource: 77.60", > " Config: 1529921605", > "Debug: Finishing transaction 42840560", > "+ TAGS=file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", > "+ CONFIG='include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle'", > "+ puppet apply --debug --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags 
file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation -e 'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle'", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp\", 65]:", > "Warning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. Please review the documentation regarding the implications." > ] >} >2018-06-25 06:16:23,675 p=25239 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks2.json exists] ******** >2018-06-25 06:16:24,139 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 06:16:24,145 p=25239 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 06:16:24,147 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 06:16:24,183 p=25239 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 2] ******************** >2018-06-25 06:16:24,219 p=25239 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:24,246 p=25239 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:24,258 
p=25239 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:24,282 p=25239 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (bootstrap tasks) for step 2] *** >2018-06-25 06:16:24,312 p=25239 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,341 p=25239 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,356 p=25239 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,363 p=25239 u=mistral | PLAY [External deployment step 3] ********************************************** >2018-06-25 06:16:24,384 p=25239 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-06-25 06:16:24,465 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,517 p=25239 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-06-25 06:16:24,545 p=25239 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,550 p=25239 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,555 p=25239 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir) => {"changed": false, "item": 
"/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,574 p=25239 u=mistral | TASK [generate inventory] ****************************************************** >2018-06-25 06:16:24,593 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,609 p=25239 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-06-25 06:16:24,635 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,651 p=25239 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-06-25 06:16:24,671 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,688 p=25239 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-06-25 06:16:24,707 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,726 p=25239 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-06-25 06:16:24,743 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,761 p=25239 u=mistral | TASK [generate collect nodes uuid playbook] ************************************ >2018-06-25 06:16:24,780 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,797 p=25239 u=mistral | TASK [set ceph-ansible verbosity] ********************************************** >2018-06-25 06:16:24,820 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,840 
p=25239 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-06-25 06:16:24,859 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,876 p=25239 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-06-25 06:16:24,896 p=25239 u=mistral | skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,915 p=25239 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-06-25 06:16:24,935 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,952 p=25239 u=mistral | TASK [generate ceph-ansible group vars mgrs] *********************************** >2018-06-25 06:16:24,969 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:24,987 p=25239 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-06-25 06:16:25,005 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,023 p=25239 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-06-25 06:16:25,040 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,056 p=25239 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-06-25 06:16:25,073 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,091 p=25239 u=mistral | TASK [generate 
ceph-ansible group vars clients] ******************************** >2018-06-25 06:16:25,112 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,133 p=25239 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-06-25 06:16:25,151 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,170 p=25239 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-06-25 06:16:25,189 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,194 p=25239 u=mistral | PLAY [Overcloud deploy step tasks for 3] *************************************** >2018-06-25 06:16:25,220 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:16:25,252 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,278 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,291 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,314 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:16:25,345 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,380 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,392 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,420 p=25239 u=mistral | TASK [include_role] 
************************************************************ >2018-06-25 06:16:25,460 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,487 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,498 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,521 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:16:25,551 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,578 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,590 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,613 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:16:25,644 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,669 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,684 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,690 p=25239 u=mistral | PLAY [Overcloud common deploy step tasks 3] ************************************ >2018-06-25 06:16:25,714 p=25239 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-06-25 06:16:25,746 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,771 p=25239 u=mistral | skipping: [compute-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,784 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,805 p=25239 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-06-25 06:16:25,833 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,859 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,872 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,894 p=25239 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-06-25 06:16:25,922 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,947 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,960 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:25,981 p=25239 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-06-25 06:16:26,008 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,038 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,051 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,071 p=25239 u=mistral | TASK [Create /var/lib/docker-config-scripts] *********************************** >2018-06-25 06:16:26,101 p=25239 u=mistral | skipping: [controller-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,126 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,139 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,160 p=25239 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-06-25 06:16:26,190 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,215 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,229 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,252 p=25239 u=mistral | TASK [Write docker config scripts] ********************************************* >2018-06-25 06:16:26,314 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( 
$(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': 'nova_api_discover_hosts.sh'}) => {"changed": false, "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini 
--get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,316 
p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': 'create_swift_secret.sh'}) => {"changed": false, "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport 
OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,317 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,319 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': 'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,324 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': 
u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': u'set_swift_keymaster_key_id.sh'}) => {"changed": false, "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster 
auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,325 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': u'docker_puppet_apply.sh'}) => {"changed": false, "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > 
/etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,326 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', 'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) => {"changed": false, "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,353 p=25239 u=mistral | TASK [Set docker_config_default fact] ****************************************** >2018-06-25 06:16:26,419 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: 
true' was specified for this result", "changed": false} >2018-06-25 06:16:26,420 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,420 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,420 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,421 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,421 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,423 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,428 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,431 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,431 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 
06:16:26,432 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,433 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,440 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,445 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,449 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,452 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,458 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,462 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,484 p=25239 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** >2018-06-25 06:16:26,513 p=25239 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} 
>2018-06-25 06:16:26,539 p=25239 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,552 p=25239 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:26,574 p=25239 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-06-25 06:16:26,603 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,629 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,651 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,674 p=25239 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-06-25 06:16:26,734 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,746 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_libvirt': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', 
u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, 
"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,751 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", 
"value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,752 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,753 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,760 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '78ace352-763a-11e8-9c1d-525400166144' --base64 'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw=='"], 'user': u'root', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 
'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', 
u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", 
"/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '78ace352-763a-11e8-9c1d-525400166144' --base64 'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,764 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,767 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 
'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': [u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'memcached_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log'], 'user': u'root', 'volumes': [u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'detach': False, 'privileged': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=eT4ymnWN2YlqROumSbpNpoGCB', u'DB_ROOT_PASSWORD=ufdBL6tH5c'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot 
-p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=eK5rGtu1BhrxK9TvrK0l'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "memcached_init_logs": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> 
/etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=eT4ymnWN2YlqROumSbpNpoGCB", "DB_ROOT_PASSWORD=ufdBL6tH5c"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag 
'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=eK5rGtu1BhrxK9TvrK0l"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,782 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': 
{'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 
'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', 
u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', 
u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': False}, 'neutron_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade 
head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'fLWtJZCynkwHz2bnZopp1aRC2'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer 
/var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": 
false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", 
"/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "fLWtJZCynkwHz2bnZopp1aRC2"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && 
/usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", 
"/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 2, "user": 
"root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", 
"/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,808 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': 
u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko /var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': [u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'2', 
u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 
'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 
'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': [u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "privileged": false, "user": 
"root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", 
"/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", 
"file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", 
"file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", 
"file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, 
"image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,826 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', 
u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_statsd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', 
u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'gnocchi_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su 
ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,829 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,829 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,830 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,831 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,832 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,859 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'swift_container_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 
'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': 
u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 
'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'keystone_refresh': {'action': 
u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', 
u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 
'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': 
u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', 
u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', 
u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": 
"/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": 
{"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": 
"host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, 
"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", 
"/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": 
"always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,872 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:26,948 p=25239 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-06-25 06:16:26,980 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,013 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,029 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,055 p=25239 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-06-25 06:16:27,143 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,187 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,194 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,196 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': 
u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,201 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': '/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,204 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': 
u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': '/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,210 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,216 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': 
u'nova:nova', 'path': u'/var/lib/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/var/lib/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,220 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,290 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,293 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/keystone.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,296 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": 
"/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,300 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,305 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,309 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,313 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,317 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': 
'/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,322 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,326 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': 
u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': '/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,330 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': 
u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/haproxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,336 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,340 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,344 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,348 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,354 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/redis.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": 
"/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,358 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,361 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/glance_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,367 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,370 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 
'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,373 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,379 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 
'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,382 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,388 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': '/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,392 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,398 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,402 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,407 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,410 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': '/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,415 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 
'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/mysql.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,419 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,424 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,428 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,435 
p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': u'/var/log/keystone', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,438 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,444 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": 
[{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,447 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,452 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,456 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': 
u'/usr/bin/nova-api-metadata ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,461 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,465 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, 
{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,469 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": 
"nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,473 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,478 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, 
"preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,483 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,487 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": 
[{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,490 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": 
"Conditional result was False"} >2018-06-25 06:16:27,496 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,500 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,506 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': 
u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,510 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,516 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': '/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, 
"skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,520 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,523 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": 
"cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,528 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,540 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 'path': u'/var/log/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/panko_api.json'}) => {"changed": false, 
"item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,549 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,551 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": 
"/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,552 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,554 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server 
/etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,557 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,562 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,566 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": 
{"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,571 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,576 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 
'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': '/var/lib/kolla/config_files/horizon.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,617 p=25239 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-06-25 06:16:27,631 p=25239 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-25 06:16:27,658 p=25239 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-25 06:16:27,684 p=25239 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-25 06:16:27,748 p=25239 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-06-25 06:16:27,802 p=25239 u=mistral | skipping: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 
'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4'}], 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,843 p=25239 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-06-25 06:16:27,903 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,905 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,917 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:16:27,938 p=25239 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-06-25 06:16:28,716 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "62439dd24dde40c90e7a39f6a1b31cc6061fe59b", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "d1a4fc06e2525150450e67007bfcc8f3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921787.98-229027596797823/source", "state": "file", "uid": 0} >2018-06-25 06:16:28,730 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "62439dd24dde40c90e7a39f6a1b31cc6061fe59b", "dest": 
"/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "d1a4fc06e2525150450e67007bfcc8f3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921788.04-218482608528793/source", "state": "file", "uid": 0} >2018-06-25 06:16:28,741 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "62439dd24dde40c90e7a39f6a1b31cc6061fe59b", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "d1a4fc06e2525150450e67007bfcc8f3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529921788.02-108658727181751/source", "state": "file", "uid": 0} >2018-06-25 06:16:28,766 p=25239 u=mistral | TASK [Run puppet host configuration for step 3] ******************************** >2018-06-25 06:16:38,150 p=25239 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-25 06:16:38,391 p=25239 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-25 06:16:42,359 p=25239 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-25 06:16:42,383 p=25239 u=mistral | TASK [Debug output for task which failed: Run puppet host configuration for step 3] *** >2018-06-25 06:16:42,450 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: 
Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.10 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seltype: seltype changed 'etc_t' to 'system_conf_t'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seltype: seltype changed 'etc_t' to 'system_conf_t'", > "Notice: Applied catalog in 3.53 seconds", > "Changes:", > " Total: 4", > "Events:", > " Success: 4", > "Resources:", > " Total: 217", > " Corrective change: 3", > " Out of sync: 4", > " Changed: 4", > "Time:", > " Concat file: 0.00", > " File line: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " User: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.00", > " Augeas: 0.02", > " Firewall: 0.02", > " File: 0.13", > " Service: 0.22", > " Pcmk property: 0.36", > " Package: 0.38", > " Pcmk resource default: 0.39", > " Exec: 0.91", > " Last run: 1529921802", > " Config retrieval: 3.60", > " Total: 6.03", > " Concat fragment: 0.00", > " Filebucket: 0.00", > "Version:", > " Config: 1529921795", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-25 06:16:42,472 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.77 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.26 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 141", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.01", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.01", > " File: 0.05", > " Service: 0.12", > " Exec: 0.26", > " Package: 0.29", > " Last run: 1529921797", > " Config retrieval: 2.06", > " Total: 2.81", > " Filebucket: 0.00", > "Version:", > " Config: 1529921794", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-25 06:16:42,499 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 1.95 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.18 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 135", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Concat file: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " Anchor: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.00", > " Firewall: 0.01", > " Augeas: 0.01", > " File: 0.09", > " Service: 0.11", > " Package: 0.22", > " Exec: 0.24", > " Last run: 1529921798", > " Config retrieval: 2.24", > " Total: 2.94", > " Concat fragment: 0.00", > " Filebucket: 0.00", > "Version:", > " Config: 1529921794", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-25 06:16:42,523 p=25239 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 3] ***************** >2018-06-25 06:16:42,554 p=25239 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:42,582 p=25239 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:42,593 p=25239 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:16:42,620 p=25239 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 3] *** >2018-06-25 06:16:42,676 p=25239 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:16:42,677 p=25239 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:16:42,690 p=25239 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:16:42,715 p=25239 u=mistral | TASK [Start containers for step 3] ********************************************* >2018-06-25 06:16:43,450 p=25239 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:17:08,832 p=25239 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:17:58,777 p=25239 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} 
>2018-06-25 06:17:58,803 p=25239 u=mistral | TASK [Debug output for task which failed: Start containers for step 3] ********* >2018-06-25 06:17:58,924 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-notification ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-notification", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "333aa6b2b383: Already exists", > "61fdbbbd43a6: Pulling fs layer", > "61fdbbbd43a6: Verifying Checksum", > "61fdbbbd43a6: Download complete", > "61fdbbbd43a6: Pull complete", > "Digest: sha256:95db990608ca6e4c17f012e9517d9667fa79c8e102fdf5a2820de692b385e938", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-account ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-account", > "a98c7da29d65: Already exists", > "b85dac0937a4: Pulling fs layer", > "b85dac0937a4: Verifying Checksum", > "b85dac0937a4: Download complete", > "b85dac0937a4: Pull complete", > "Digest: sha256:8619e6534421b29808eaaad146ceac6399780459430f3c7fa490089377aa1380", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", > "stdout: ", > "stdout: d4904f88b82a043e04b57df2646b461960b5e38b935f784be2c825c43aeeb75d", > "stdout: 2018-06-25 10:16:48.739 11 WARNING oslo_config.cfg [-] Option \"db_backend\" from group \"DEFAULT\" is deprecated. Use option \"backend\" from group \"database\".", > "2018-06-25 10:16:48.823 11 INFO migrate.versioning.api [-] 70 -> 71... 
", > "2018-06-25 10:16:49.017 11 INFO migrate.versioning.api [-] done", > "2018-06-25 10:16:49.017 11 INFO migrate.versioning.api [-] 71 -> 72... ", > "2018-06-25 10:16:49.057 11 INFO migrate.versioning.api [-] done", > "2018-06-25 10:16:49.058 11 INFO migrate.versioning.api [-] 72 -> 73... ", > "2018-06-25 10:16:49.234 11 INFO migrate.versioning.api [-] done", > "2018-06-25 10:16:49.234 11 INFO migrate.versioning.api [-] 73 -> 74... ", > "2018-06-25 10:16:49.241 11 INFO migrate.versioning.api [-] done", > "2018-06-25 10:16:49.241 11 INFO migrate.versioning.api [-] 74 -> 75... ", > "2018-06-25 10:16:49.247 11 INFO migrate.versioning.api [-] done", > "2018-06-25 10:16:49.248 11 INFO migrate.versioning.api [-] 75 -> 76... ", > "2018-06-25 10:16:49.254 11 INFO migrate.versioning.api [-] done", > "2018-06-25 10:16:49.254 11 INFO migrate.versioning.api [-] 76 -> 77... ", > "2018-06-25 10:16:49.260 11 INFO migrate.versioning.api [-] done", > "2018-06-25 10:16:49.260 11 INFO migrate.versioning.api [-] 77 -> 78... ", > "2018-06-25 10:16:49.266 11 INFO migrate.versioning.api [-] done", > "2018-06-25 10:16:49.266 11 INFO migrate.versioning.api [-] 78 -> 79... ", > "2018-06-25 10:16:49.355 11 INFO migrate.versioning.api [-] done", > "2018-06-25 10:16:49.355 11 INFO migrate.versioning.api [-] 79 -> 80... ", > "2018-06-25 10:16:49.447 11 INFO migrate.versioning.api [-] done", > "2018-06-25 10:16:49.448 11 INFO migrate.versioning.api [-] 80 -> 81... ", > "2018-06-25 10:16:49.454 11 INFO migrate.versioning.api [-] done", > "2018-06-25 10:16:49.454 11 INFO migrate.versioning.api [-] 81 -> 82... ", > "2018-06-25 10:16:49.459 11 INFO migrate.versioning.api [-] done", > "2018-06-25 10:16:49.460 11 INFO migrate.versioning.api [-] 82 -> 83... ", > "2018-06-25 10:16:49.466 11 INFO migrate.versioning.api [-] done", > "2018-06-25 10:16:49.466 11 INFO migrate.versioning.api [-] 83 -> 84... 
", > "2018-06-25 10:16:49.472 11 INFO migrate.versioning.api [-] done", > "2018-06-25 10:16:49.472 11 INFO migrate.versioning.api [-] 84 -> 85... ", > "2018-06-25 10:16:49.478 11 INFO migrate.versioning.api [-] done", > "2018-06-25 10:16:49.479 11 INFO migrate.versioning.api [-] 85 -> 86... ", > "2018-06-25 10:16:49.530 11 INFO migrate.versioning.api [-] done", > "stdout: \u001b[0;32mInfo: Loading facts\u001b[0m", > "\u001b[0;32mInfo: Loading facts\u001b[0m", > "\u001b[mNotice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend\u001b[0m", > "\u001b[mNotice: Compiled catalog for controller-0.localdomain in environment production in 1.38 seconds\u001b[0m", > "\u001b[0;32mInfo: Applying configuration version '1529921814'\u001b[0m", > "\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]/Vs_bridge[br-ex]/external_ids: external_ids changed '' to 'bridge-id=br-ex'\u001b[0m", > "\u001b[0;32mInfo: Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]: Unscheduling all events on Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]\u001b[0m", > "\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]/Vs_bridge[br-isolated]/external_ids: external_ids changed '' to 'bridge-id=br-isolated'\u001b[0m", > "\u001b[0;32mInfo: Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]: Unscheduling all events on Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]\u001b[0m", > "\u001b[0;32mInfo: Creating state file /var/lib/puppet/state/state.yaml\u001b[0m", > "\u001b[mNotice: Applied catalog in 0.26 seconds\u001b[0m", > "stderr: Running in chroot, ignoring request.", > "\u001b[1;33mWarning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found\u001b[0m", > "\u001b[1;33mWarning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)\u001b[0m", > 
"\u001b[1;33mWarning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "\u001b[1;33mWarning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 219]:[\"unknown\", 1]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')\u001b[0m", > "stderr: Option \"logdir\" from group \"DEFAULT\" is deprecated. Use option \"log-dir\" from group \"DEFAULT\".", > "stdout: Upgraded database to: queens_expand01, current revision(s): queens_expand01", > "Database migration is up to date. No migration needed.", > "Upgraded database to: queens_contract01, current revision(s): queens_contract01", > "Database is synced successfully.", > "stderr: + sudo -E kolla_set_configs", > "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", > "INFO:__main__:Validating config file", > "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", > "INFO:__main__:Copying service configuration files", > "INFO:__main__:Deleting /etc/glance/glance-api.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/glance/glance-api.conf to /etc/glance/glance-api.conf", > "INFO:__main__:Deleting /etc/glance/glance-cache.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/glance/glance-cache.conf to /etc/glance/glance-cache.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/my.cnf.d/tripleo.cnf to /etc/my.cnf.d/tripleo.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.conf to /etc/ceph/ceph.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.admin.keyring to /etc/ceph/ceph.client.admin.keyring", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src-ceph/ceph.mon.keyring to /etc/ceph/ceph.mon.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.mgr.controller-0.keyring to /etc/ceph/ceph.mgr.controller-0.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.manila.keyring to /etc/ceph/ceph.client.manila.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.radosgw.keyring to /etc/ceph/ceph.client.radosgw.keyring", > "INFO:__main__:Writing out command to execute", > "INFO:__main__:Setting permission for /var/lib/glance", > "INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring", > "++ cat /run_command", > "+ CMD='/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf'", > "+ ARGS=", > "+ [[ ! -n '' ]]", > "+ . kolla_extend_start", > "++ [[ ! -d /var/log/kolla/glance ]]", > "++ mkdir -p /var/log/kolla/glance", > "+++ stat -c %a /var/log/kolla/glance", > "++ [[ 2755 != \\7\\5\\5 ]]", > "++ chmod 755 /var/log/kolla/glance", > "++ . 
/usr/local/bin/kolla_glance_extend_start", > "+++ [[ -n 0 ]]", > "+++ glance-manage db_sync", > "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1340: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade", > " expire_on_commit=expire_on_commit, _conf=conf)", > "INFO [alembic.runtime.migration] Context impl MySQLImpl.", > "INFO [alembic.runtime.migration] Will assume non-transactional DDL.", > "INFO [alembic.runtime.migration] Running upgrade -> liberty, liberty initial", > "INFO [alembic.runtime.migration] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of 'images' table", > "INFO [alembic.runtime.migration] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server", > "INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_expand01, add visibility to images", > "INFO [alembic.runtime.migration] Running upgrade ocata_expand01 -> pike_expand01, empty expand for symmetry with pike_contract01", > "INFO [alembic.runtime.migration] Running upgrade pike_expand01 -> queens_expand01", > "INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_contract01, remove is_public from images", > "INFO [alembic.runtime.migration] Running upgrade ocata_contract01 -> pike_contract01, drop glare artifacts tables", > "INFO [alembic.runtime.migration] Running upgrade pike_contract01 -> queens_contract01", > "+++ glance-manage db_load_metadefs", > "+++ exit 0", > "stdout: '/swift_ringbuilder/etc/swift/account.ring.gz' -> '/etc/swift/account.ring.gz'", > "'/swift_ringbuilder/etc/swift/container.ring.gz' -> '/etc/swift/container.ring.gz'", > "'/swift_ringbuilder/etc/swift/object.ring.gz' -> '/etc/swift/object.ring.gz'", > "'/swift_ringbuilder/etc/swift/account.builder' -> '/etc/swift/account.builder'", > "'/swift_ringbuilder/etc/swift/container.builder' -> '/etc/swift/container.builder'", > "'/swift_ringbuilder/etc/swift/object.builder' -> 
'/etc/swift/object.builder'", > "'/swift_ringbuilder/etc/swift/backups' -> '/etc/swift/backups'", > "'/swift_ringbuilder/etc/swift/backups/1529920917.account.builder' -> '/etc/swift/backups/1529920917.account.builder'", > "'/swift_ringbuilder/etc/swift/backups/1529920917.object.builder' -> '/etc/swift/backups/1529920917.object.builder'", > "'/swift_ringbuilder/etc/swift/backups/1529920918.container.builder' -> '/etc/swift/backups/1529920918.container.builder'", > "'/swift_ringbuilder/etc/swift/backups/1529920920.account.builder' -> '/etc/swift/backups/1529920920.account.builder'", > "'/swift_ringbuilder/etc/swift/backups/1529920920.account.ring.gz' -> '/etc/swift/backups/1529920920.account.ring.gz'", > "'/swift_ringbuilder/etc/swift/backups/1529920920.container.builder' -> '/etc/swift/backups/1529920920.container.builder'", > "'/swift_ringbuilder/etc/swift/backups/1529920920.container.ring.gz' -> '/etc/swift/backups/1529920920.container.ring.gz'", > "'/swift_ringbuilder/etc/swift/backups/1529920920.object.builder' -> '/etc/swift/backups/1529920920.object.builder'", > "'/swift_ringbuilder/etc/swift/backups/1529920920.object.ring.gz' -> '/etc/swift/backups/1529920920.object.ring.gz'", > "stderr: INFO [alembic.runtime.migration] Context impl MySQLImpl.", > "INFO [alembic.runtime.migration] Running upgrade -> 001, Icehouse release", > "INFO [alembic.runtime.migration] Running upgrade 001 -> 002, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 002 -> 003, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 003 -> 004, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 004 -> 005, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 005 -> 006, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 006 -> 007, convert clusters.status_description to LongText", > "INFO [alembic.runtime.migration] Running upgrade 007 -> 008, add security_groups field to node groups", > "INFO [alembic.runtime.migration] 
Running upgrade 008 -> 009, add rollback info to cluster", > "INFO [alembic.runtime.migration] Running upgrade 009 -> 010, add auto_security_groups flag to node group", > "INFO [alembic.runtime.migration] Running upgrade 010 -> 011, add Sahara settings info to cluster", > "INFO [alembic.runtime.migration] Running upgrade 011 -> 012, add availability_zone field to node groups", > "INFO [alembic.runtime.migration] Running upgrade 012 -> 013, add volumes_availability_zone field to node groups", > "INFO [alembic.runtime.migration] Running upgrade 013 -> 014, add_volume_type", > "INFO [alembic.runtime.migration] Running upgrade 014 -> 015, add_events_objects", > "INFO [alembic.runtime.migration] Running upgrade 015 -> 016, Add is_proxy_gateway", > "INFO [alembic.runtime.migration] Running upgrade 016 -> 017, drop progress in JobExecution", > "INFO [alembic.runtime.migration] Running upgrade 017 -> 018, add volume_local_to_instance flag", > "INFO [alembic.runtime.migration] Running upgrade 018 -> 019, Add is_default field for cluster and node_group templates", > "INFO [alembic.runtime.migration] Running upgrade 019 -> 020, remove redandunt progress ops", > "INFO [alembic.runtime.migration] Running upgrade 020 -> 021, Add data_source_urls to job_executions to support placeholders", > "INFO [alembic.runtime.migration] Running upgrade 021 -> 022, add_job_interface", > "INFO [alembic.runtime.migration] Running upgrade 022 -> 023, add_use_autoconfig", > "INFO [alembic.runtime.migration] Running upgrade 023 -> 024, manila_shares", > "INFO [alembic.runtime.migration] Running upgrade 024 -> 025, Increase internal_ip and management_ip column size to work with IPv6", > "INFO [alembic.runtime.migration] Running upgrade 025 -> 026, add is_public and is_protected flags", > "INFO [alembic.runtime.migration] Running upgrade 026 -> 027, Rename oozie_job_id", > "INFO [alembic.runtime.migration] Running upgrade 027 -> 028, add_storage_devices_number", > "INFO [alembic.runtime.migration] 
Running upgrade 028 -> 029, set is_protected on is_default", > "INFO [alembic.runtime.migration] Running upgrade 029 -> 030, health-check", > "INFO [alembic.runtime.migration] Running upgrade 030 -> 031, added_plugins_table", > "INFO [alembic.runtime.migration] Running upgrade 031 -> 032, 032_add_domain_name", > "INFO [alembic.runtime.migration] Running upgrade 032 -> 033, 033_add anti_affinity_ratio field to cluster", > "stdout: 1536fb3fe3749937bf4788b7f432cc39a55de7e7cd83ab7c3d743663113ee646", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/10-keystone_wsgi_admin.conf to /etc/httpd/conf.d/10-keystone_wsgi_admin.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/10-keystone_wsgi_main.conf to /etc/httpd/conf.d/10-keystone_wsgi_main.conf", > "INFO:__main__:Deleting /etc/httpd/conf.d/ssl.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/ssl.conf to /etc/httpd/conf.d/ssl.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/access_compat.load to /etc/httpd/conf.modules.d/access_compat.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/actions.load to /etc/httpd/conf.modules.d/actions.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/alias.conf to /etc/httpd/conf.modules.d/alias.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/alias.load to /etc/httpd/conf.modules.d/alias.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/auth_basic.load to /etc/httpd/conf.modules.d/auth_basic.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/auth_digest.load to /etc/httpd/conf.modules.d/auth_digest.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_anon.load to /etc/httpd/conf.modules.d/authn_anon.load", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_core.load to /etc/httpd/conf.modules.d/authn_core.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_dbm.load to /etc/httpd/conf.modules.d/authn_dbm.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_file.load to /etc/httpd/conf.modules.d/authn_file.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_core.load to /etc/httpd/conf.modules.d/authz_core.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_dbm.load to /etc/httpd/conf.modules.d/authz_dbm.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_groupfile.load to /etc/httpd/conf.modules.d/authz_groupfile.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_host.load to /etc/httpd/conf.modules.d/authz_host.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_owner.load to /etc/httpd/conf.modules.d/authz_owner.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_user.load to /etc/httpd/conf.modules.d/authz_user.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/autoindex.conf to /etc/httpd/conf.modules.d/autoindex.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/autoindex.load to /etc/httpd/conf.modules.d/autoindex.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/cache.load to /etc/httpd/conf.modules.d/cache.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/cgi.load to /etc/httpd/conf.modules.d/cgi.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav.load to /etc/httpd/conf.modules.d/dav.load", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav_fs.conf to /etc/httpd/conf.modules.d/dav_fs.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav_fs.load to /etc/httpd/conf.modules.d/dav_fs.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/deflate.conf to /etc/httpd/conf.modules.d/deflate.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/deflate.load to /etc/httpd/conf.modules.d/deflate.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dir.conf to /etc/httpd/conf.modules.d/dir.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dir.load to /etc/httpd/conf.modules.d/dir.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/env.load to /etc/httpd/conf.modules.d/env.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/expires.load to /etc/httpd/conf.modules.d/expires.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/ext_filter.load to /etc/httpd/conf.modules.d/ext_filter.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/filter.load to /etc/httpd/conf.modules.d/filter.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/include.load to /etc/httpd/conf.modules.d/include.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/log_config.load to /etc/httpd/conf.modules.d/log_config.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/logio.load to /etc/httpd/conf.modules.d/logio.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime.conf to /etc/httpd/conf.modules.d/mime.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime.load to 
/etc/httpd/conf.modules.d/mime.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime_magic.conf to /etc/httpd/conf.modules.d/mime_magic.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime_magic.load to /etc/httpd/conf.modules.d/mime_magic.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/negotiation.conf to /etc/httpd/conf.modules.d/negotiation.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/negotiation.load to /etc/httpd/conf.modules.d/negotiation.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/prefork.conf to /etc/httpd/conf.modules.d/prefork.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/prefork.load to /etc/httpd/conf.modules.d/prefork.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/rewrite.load to /etc/httpd/conf.modules.d/rewrite.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/setenvif.conf to /etc/httpd/conf.modules.d/setenvif.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/setenvif.load to /etc/httpd/conf.modules.d/setenvif.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/socache_shmcb.load to /etc/httpd/conf.modules.d/socache_shmcb.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/speling.load to /etc/httpd/conf.modules.d/speling.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/ssl.load to /etc/httpd/conf.modules.d/ssl.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/status.conf to /etc/httpd/conf.modules.d/status.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/status.load to 
/etc/httpd/conf.modules.d/status.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/substitute.load to /etc/httpd/conf.modules.d/substitute.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/suexec.load to /etc/httpd/conf.modules.d/suexec.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/systemd.load to /etc/httpd/conf.modules.d/systemd.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/unixd.load to /etc/httpd/conf.modules.d/unixd.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/usertrack.load to /etc/httpd/conf.modules.d/usertrack.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/version.load to /etc/httpd/conf.modules.d/version.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/vhost_alias.load to /etc/httpd/conf.modules.d/vhost_alias.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/wsgi.conf to /etc/httpd/conf.modules.d/wsgi.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/wsgi.load to /etc/httpd/conf.modules.d/wsgi.load", > "INFO:__main__:Deleting /etc/httpd/conf/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf/httpd.conf to /etc/httpd/conf/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf/ports.conf to /etc/httpd/conf/ports.conf", > "INFO:__main__:Creating directory /etc/keystone/credential-keys", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/credential-keys/0 to /etc/keystone/credential-keys/0", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/credential-keys/1 to /etc/keystone/credential-keys/1", > "INFO:__main__:Creating directory /etc/keystone/fernet-keys", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src/etc/keystone/fernet-keys/0 to /etc/keystone/fernet-keys/0", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/fernet-keys/1 to /etc/keystone/fernet-keys/1", > "INFO:__main__:Deleting /etc/keystone/keystone.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/keystone.conf to /etc/keystone/keystone.conf", > "INFO:__main__:Creating directory /etc/systemd/system/httpd.service.d", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/systemd/system/httpd.service.d/httpd.conf to /etc/systemd/system/httpd.service.d/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/var/spool/cron/keystone to /var/spool/cron/keystone", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/var/www/cgi-bin/keystone/keystone-admin to /var/www/cgi-bin/keystone/keystone-admin", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/var/www/cgi-bin/keystone/keystone-public to /var/www/cgi-bin/keystone/keystone-public", > "+ CMD='/usr/sbin/httpd -DFOREGROUND'", > "++ [[ rhel =~ debian|ubuntu ]]", > "++ rm -rf /var/run/httpd/htcacheclean /run/httpd/htcacheclean '/tmp/httpd*'", > "++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone", > "++ [[ ! -d /var/log/kolla/keystone ]]", > "++ mkdir -p /var/log/kolla/keystone", > "+++ stat -c %U:%G /var/log/kolla/keystone", > "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]", > "++ chown keystone:kolla /var/log/kolla/keystone", > "++ '[' '!' 
-f /var/log/kolla/keystone/keystone.log ']'", > "++ touch /var/log/kolla/keystone/keystone.log", > "+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log", > "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]", > "++ chown keystone:keystone /var/log/kolla/keystone/keystone.log", > "+++ stat -c %a /var/log/kolla/keystone", > "++ chmod 755 /var/log/kolla/keystone", > "++ EXTRA_KEYSTONE_MANAGE_ARGS=", > "++ [[ -n '' ]]", > "++ [[ -n 0 ]]", > "++ sudo -H -u keystone keystone-manage db_sync", > "++ exit 0", > "stdout: b4b94dcac00ccf7f93eb58fb86b96b5ae845f535efc1b830007413e32384a423", > "stdout: Running upgrade for neutron ...", > "OK", > "Running upgrade for neutron-fwaas ...", > "Running upgrade for neutron-lbaas ...", > "Running upgrade for vmware-nsx ...", > "INFO [alembic.runtime.migration] Running upgrade -> kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225", > "INFO [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151", > "INFO [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf", > "INFO [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee", > "INFO [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f", > "INFO [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773", > "INFO [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592", > "INFO [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7", > "INFO [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79", > "INFO [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051", > "INFO [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136", > "INFO [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59", > "INFO [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d", > "INFO [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 
13cfb89f881a", > "INFO [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25", > "INFO [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee", > "INFO [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9", > "INFO [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4", > "INFO [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664", > "INFO [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5", > "INFO [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f", > "INFO [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821", > "INFO [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4", > "INFO [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81", > "INFO [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6", > "INFO [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532", > "INFO [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f", > "INFO [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a", > "INFO [alembic.runtime.migration] Running upgrade 0e66c5227a8a -> 45f8dd33480b", > "INFO [alembic.runtime.migration] Running upgrade 45f8dd33480b -> 5abc0278ca73", > "INFO [alembic.runtime.migration] Running upgrade 5abc0278ca73 -> d3435b514502", > "INFO [alembic.runtime.migration] Running upgrade d3435b514502 -> 30107ab6a3ee", > "INFO [alembic.runtime.migration] Running upgrade 30107ab6a3ee -> c415aab1c048", > "INFO [alembic.runtime.migration] Running upgrade c415aab1c048 -> a963b38d82f4", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99", > "INFO [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada", > "INFO [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016", > "INFO [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3", > 
"INFO [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d", > "INFO [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d", > "INFO [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297", > "INFO [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c", > "INFO [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39", > "INFO [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b", > "INFO [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050", > "INFO [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9", > "INFO [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada", > "INFO [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc", > "INFO [alembic.runtime.migration] Running upgrade 4ffceebfcdc -> 7bbb25278f53", > "INFO [alembic.runtime.migration] Running upgrade 7bbb25278f53 -> 89ab9a816d70", > "INFO [alembic.runtime.migration] Running upgrade a963b38d82f4 -> 3d0e74aa7d37", > "INFO [alembic.runtime.migration] Running upgrade 3d0e74aa7d37 -> 030a959ceafa", > "INFO [alembic.runtime.migration] Running upgrade 030a959ceafa -> a5648cfeeadf", > "INFO [alembic.runtime.migration] Running upgrade a5648cfeeadf -> 0f5bef0f87d4", > "INFO [alembic.runtime.migration] Running upgrade 0f5bef0f87d4 -> 67daae611b6e", > "INFO [alembic.runtime.migration] Running upgrade 89ab9a816d70 -> c879c5e1ee90", > "INFO [alembic.runtime.migration] Running upgrade c879c5e1ee90 -> 8fd3918ef6f4", > "INFO [alembic.runtime.migration] Running upgrade 8fd3918ef6f4 -> 4bcd4df1f426", > "INFO [alembic.runtime.migration] Running upgrade 4bcd4df1f426 -> b67e765a3524", > "INFO [alembic.runtime.migration] Running upgrade 67daae611b6e -> 6b461a21bcfc", > "INFO [alembic.runtime.migration] Running upgrade 6b461a21bcfc -> 5cd92597d11d", > "INFO [alembic.runtime.migration] Running upgrade 5cd92597d11d -> 929c968efe70", > "INFO 
[alembic.runtime.migration] Running upgrade 929c968efe70 -> a9c43481023c", > "INFO [alembic.runtime.migration] Running upgrade a9c43481023c -> 804a3c76314c", > "INFO [alembic.runtime.migration] Running upgrade 804a3c76314c -> 2b42d90729da", > "INFO [alembic.runtime.migration] Running upgrade 2b42d90729da -> 62c781cb6192", > "INFO [alembic.runtime.migration] Running upgrade 62c781cb6192 -> c8c222d42aa9", > "INFO [alembic.runtime.migration] Running upgrade c8c222d42aa9 -> 349b6fd605a6", > "INFO [alembic.runtime.migration] Running upgrade 349b6fd605a6 -> 7d32f979895f", > "INFO [alembic.runtime.migration] Running upgrade 7d32f979895f -> 594422d373ee", > "INFO [alembic.runtime.migration] Running upgrade 594422d373ee -> 61663558142c", > "INFO [alembic.runtime.migration] Running upgrade b67e765a3524 -> a84ccf28f06a", > "INFO [alembic.runtime.migration] Running upgrade a84ccf28f06a -> 7d9d8eeec6ad", > "INFO [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab", > "INFO [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0", > "INFO [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62", > "INFO [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353", > "INFO [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586", > "INFO [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d", > "INFO [alembic.runtime.migration] Running upgrade -> start_neutron_fwaas, start neutron-fwaas chain", > "INFO [alembic.runtime.migration] Running upgrade start_neutron_fwaas -> 4202e3047e47, add_index_tenant_id", > "INFO [alembic.runtime.migration] Running upgrade 4202e3047e47 -> 540142f314f4, FWaaS router insertion", > "INFO [alembic.runtime.migration] Running upgrade 540142f314f4 -> 796c68dffbb, cisco_csr_fwaas", > "INFO [alembic.runtime.migration] Running upgrade 796c68dffbb -> kilo, kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> c40fbb377ad, Initial Liberty no-op 
script.", > "INFO [alembic.runtime.migration] Running upgrade c40fbb377ad -> 4b47ea298795, add reject rule", > "INFO [alembic.runtime.migration] Running upgrade 4b47ea298795 -> d6a12e637e28, neutron-fwaas v2.0", > "INFO [alembic.runtime.migration] Running upgrade d6a12e637e28 -> 876782258a43, create_default_firewall_groups_table", > "INFO [alembic.runtime.migration] Running upgrade 876782258a43 -> f24e0d5e5bff, uniq_firewallgroupportassociation0port", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 67c8e8d61d5, Initial Liberty no-op script.", > "INFO [alembic.runtime.migration] Running upgrade 67c8e8d61d5 -> 458aa42b14b, fw_table_alter script to make <name> column case sensitive", > "INFO [alembic.runtime.migration] Running upgrade 458aa42b14b -> f83a0b2964d0, rename tenant to project", > "INFO [alembic.runtime.migration] Running upgrade f83a0b2964d0 -> fd38cd995cc0, change shared attribute for firewall resource", > "INFO [alembic.runtime.migration] Running upgrade -> start_neutron_lbaas, start neutron-lbaas chain", > "INFO [alembic.runtime.migration] Running upgrade start_neutron_lbaas -> lbaasv2, lbaas version 2 api", > "INFO [alembic.runtime.migration] Running upgrade lbaasv2 -> 4deef6d81931, add provisioning and operating statuses", > "INFO [alembic.runtime.migration] Running upgrade 4deef6d81931 -> 4b6d8d5310b8, add_index_tenant_id", > "INFO [alembic.runtime.migration] Running upgrade 4b6d8d5310b8 -> 364f9b6064f0, agentv2", > "INFO [alembic.runtime.migration] Running upgrade 364f9b6064f0 -> lbaasv2_tls, lbaasv2 TLS", > "INFO [alembic.runtime.migration] Running upgrade lbaasv2_tls -> 4ba00375f715, edge_driver", > "INFO [alembic.runtime.migration] Running upgrade 4ba00375f715 -> kilo, kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 3345facd0452, Initial Liberty no-op expand script.", > "INFO [alembic.runtime.migration] Running upgrade 3345facd0452 -> 4a408dd491c2, Addition of Name column to lbaas_members and lbaas_healthmonitors 
table", > "INFO [alembic.runtime.migration] Running upgrade 4a408dd491c2 -> 3426acbc12de, Add flavor id", > "INFO [alembic.runtime.migration] Running upgrade 3426acbc12de -> 6aee0434f911, independent pools", > "INFO [alembic.runtime.migration] Running upgrade 6aee0434f911 -> 3543deab1547, add_l7_tables", > "INFO [alembic.runtime.migration] Running upgrade 3543deab1547 -> 62deca5010cd, Add tenant-id index for L7 tables", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 130ebfdef43, Initial Liberty no-op contract revision.", > "INFO [alembic.runtime.migration] Running upgrade 130ebfdef43 -> 4b4dc6d5d843, rename tenant to project", > "INFO [alembic.runtime.migration] Running upgrade 4b4dc6d5d843 -> e6417a8b114d, Drop v1 tables", > "INFO [alembic.runtime.migration] Running upgrade 62deca5010cd -> 844352f9fe6f, Add healthmonitor max retries down", > "INFO [alembic.runtime.migration] Running upgrade -> kilo, kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 53a3254aa95e, Initial Liberty no-op expand script.", > "INFO [alembic.runtime.migration] Running upgrade 53a3254aa95e -> 28430956782d, nsxv3_security_groups", > "INFO [alembic.runtime.migration] Running upgrade 28430956782d -> 279b70ac3ae8, NSXv3 Add l2gwconnection table", > "INFO [alembic.runtime.migration] Running upgrade 279b70ac3ae8 -> 312211a5725f, nsxv_lbv2", > "INFO [alembic.runtime.migration] Running upgrade 312211a5725f -> 2af850eb3970, update nsxv tz binding type", > "INFO [alembic.runtime.migration] Running upgrade 2af850eb3970 -> 69fb78b33d41, NSXv add dns search domain to subnets", > "INFO [alembic.runtime.migration] Running upgrade 69fb78b33d41 -> 20483029f1ff, update nsx_v3 tz_network_bindings_binding_type", > "INFO [alembic.runtime.migration] Running upgrade 20483029f1ff -> 4c45bcadccf9, extend_secgroup_rule", > "INFO [alembic.runtime.migration] Running upgrade 4c45bcadccf9 -> 2c87aedb206f, nsxv_security_group_logging", > "INFO [alembic.runtime.migration] Running upgrade 
2c87aedb206f -> 3e4dccfe6fb4, NSXv add dns search domain to subnets", > "INFO [alembic.runtime.migration] Running upgrade 3e4dccfe6fb4 -> 967462f585e1, add dvs_id column to neutron_nsx_network_mappings", > "INFO [alembic.runtime.migration] Running upgrade 967462f585e1 -> b7f41687cbad, nsxv3_qos_policy_mapping", > "INFO [alembic.runtime.migration] Running upgrade b7f41687cbad -> c288bb6a7252, NSXv add resource pool to the router bindings table", > "INFO [alembic.runtime.migration] Running upgrade c288bb6a7252 -> c644ec62c585, NSXv3 add nsx_service_bindings and nsx_dhcp_bindings tables", > "INFO [alembic.runtime.migration] Running upgrade c644ec62c585 -> 5e564e781d77, add nsx binding type", > "INFO [alembic.runtime.migration] Running upgrade 5e564e781d77 -> aede17d51d0f, add timestamp", > "INFO [alembic.runtime.migration] Running upgrade aede17d51d0f -> 7e46906f8997, lbaas foreignkeys", > "INFO [alembic.runtime.migration] Running upgrade 7e46906f8997 -> 86a55205337c, NSXv add availability zone to the router bindings table instead of", > "the resource pool column", > "INFO [alembic.runtime.migration] Running upgrade 86a55205337c -> 633514d94b93, Add support for TaaS", > "INFO [alembic.runtime.migration] Running upgrade 633514d94b93 -> 1b4eaffe4f31, NSX Adds a 'provider' attribute to security-group", > "INFO [alembic.runtime.migration] Running upgrade 1b4eaffe4f31 -> 6e6da8296c0e, Add support for IPAM in NSXv", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 393bf843b96, Initial Liberty no-op contract script.", > "INFO [alembic.runtime.migration] Running upgrade 393bf843b96 -> 3c88bdea3054, nsxv_vdr_dhcp_binding.py", > "INFO [alembic.runtime.migration] Running upgrade 3c88bdea3054 -> 5ed1ffbc0d2a, nsxv_security_group_logging", > "INFO [alembic.runtime.migration] Running upgrade 5ed1ffbc0d2a -> 081af0e396d7, nsxv3_secgroup_local_ip_prefix", > "INFO [alembic.runtime.migration] Running upgrade 081af0e396d7 -> dbe29d208ac6, NSXv add DHCP MTU to subnets", > 
"INFO [alembic.runtime.migration] Running upgrade dbe29d208ac6 -> d49ac91b560e, Support shared pools with NSXv LBaaSv2 driver", > "INFO [alembic.runtime.migration] Running upgrade d49ac91b560e -> 5c8f451290b7, nsxv_subnet_ipam rename to nsx_subnet_ipam", > "INFO [alembic.runtime.migration] Running upgrade 5c8f451290b7 -> 14a89ddf96e2, NSX Adds a 'availability_zone' attribute to internal-networks table", > "INFO [alembic.runtime.migration] Running upgrade 14a89ddf96e2 -> 8c0a81a07691, Update the primary key constraint of nsx_subnet_ipam", > "INFO [alembic.runtime.migration] Running upgrade 8c0a81a07691 -> 84ceffa27115, remove the foreign key constrain from nsxv3_qos_policy_mapping", > "INFO [alembic.runtime.migration] Running upgrade 84ceffa27115 -> a1be06050b41, update nsx binding types", > "INFO [alembic.runtime.migration] Running upgrade a1be06050b41 -> 717f7f63a219, nsxv3_lbaas_l7policy", > "INFO [alembic.runtime.migration] Running upgrade 6e6da8296c0e -> 7b5ec3caa9a4, Fix the availability zones default value in the router bindings table", > "INFO [alembic.runtime.migration] Running upgrade 7b5ec3caa9a4 -> e816d4fe9d4f, NSX Adds a 'policy' attribute to security-group", > "INFO [alembic.runtime.migration] Running upgrade e816d4fe9d4f -> dd9fe5a3a526, NSX Adds certificate table for client certificate management", > "INFO [alembic.runtime.migration] Running upgrade dd9fe5a3a526 -> 01a33f93f5fd, nsxv_lbv2_l7policy", > "INFO [alembic.runtime.migration] Running upgrade 01a33f93f5fd -> e4c503f4133f, Port vnic_type support", > "INFO [alembic.runtime.migration] Running upgrade e4c503f4133f -> 7c4704ad37df, Fix NSX Lbaas L7 policy table creation", > "INFO [alembic.runtime.migration] Running upgrade 7c4704ad37df -> 8699700cd95c, nsxv_bgp_speaker_mapping", > "INFO [alembic.runtime.migration] Running upgrade 8699700cd95c -> 53eb497903a4, Drop VDR DHCP bindings table", > "INFO [alembic.runtime.migration] Running upgrade 53eb497903a4 -> ea7a72ab9643", > "INFO 
[alembic.runtime.migration] Running upgrade ea7a72ab9643 -> 9799427fc0e1, nsx map project to plugin", > "INFO [alembic.runtime.migration] Running upgrade 9799427fc0e1 -> 0dbeda408e41, nsxv3_vpn_mapping", > "stdout: ac41920be6301d2c401e9f03851c3f56922c0af4ef0b6067d09dbafe64b51d4e", > "stdout: 2ba036e41a7b3f0343958b3c0909453cf8fc1d82640ea2067f5ffc2446cae644", > "stdout: f6f2bf916bb057472ad8be24a76cdb6ecae6e6a20cd5283d2ca488a49774d31e", > "stdout: (cellv2) Creating default cell_v2 cell", > "stdout: 9f2bc5db40096414c4ef0c0cf335fe240cb8b045200d5977a7531f2cbf06435c", > "stdout: 1d0c4d073f5dd27cdf3fc58e7bc2e81c91e3a18dcd14212d7abb2280e22b9a1c", > "stderr: /usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')", > " result = self._query(query)", > "/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')", > "stdout: 2f09844ad66d4034838a73f43ac8633b4f0beed827d9e5334c375432c371162b" > ] >} >2018-06-25 06:17:58,933 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-libvirt ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-libvirt", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c39bfe26f6c5: Pulling fs layer", > "c39bfe26f6c5: Verifying Checksum", > "c39bfe26f6c5: Download complete", > "c39bfe26f6c5: Pull complete", > "Digest: sha256:ddf9894ce80fe045252534284d3aa3e1d156aca8a8eeca908571558e3b54428f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", > "", > "stderr: ", > "stdout: \u001b[0;32mInfo: Loading facts\u001b[0m", > "\u001b[0;32mInfo: Loading facts\u001b[0m", > "\u001b[mNotice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend\u001b[0m", > "\u001b[mNotice: Compiled catalog for compute-0.localdomain in environment production in 1.25 seconds\u001b[0m", > "\u001b[0;32mInfo: Applying configuration version '1529921826'\u001b[0m", > "\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]/Vs_bridge[br-ex]/ensure: created\u001b[0m", > "\u001b[0;32mInfo: Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]: Unscheduling all events on Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]\u001b[0m", > "\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]/Vs_bridge[br-isolated]/external_ids: external_ids changed '' to 'bridge-id=br-isolated'\u001b[0m", > "\u001b[0;32mInfo: Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]: Unscheduling all events on Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]\u001b[0m", > "\u001b[0;32mInfo: Creating state file /var/lib/puppet/state/state.yaml\u001b[0m", > "\u001b[mNotice: Applied catalog in 0.20 seconds\u001b[0m", > "stderr: Running in chroot, ignoring request.", > "\u001b[1;33mWarning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found\u001b[0m", 
> "\u001b[1;33mWarning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)\u001b[0m", > "\u001b[1;33mWarning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "\u001b[1;33mWarning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 219]:[\"unknown\", 1]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')\u001b[0m", > "stdout: c24a73a29c96396420d430bea2755631377956e03c0364388ea949ceb666696e", > "stdout: 6cedc158e651c90f5a7fff4d4fc8d91f24fd5ef85cc48423c0c2ef42417ea233", > "stdout: 47f27aea47b5ea7ac259690af4f25964a5dfb304ee6600331d8b408cc4ad8d38" > ] >} >2018-06-25 06:17:58,956 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-06-25 06:17:58,981 p=25239 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks3.json exists] ******** >2018-06-25 06:17:59,466 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 06:17:59,499 p=25239 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 06:17:59,505 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"atime": 1529920799.4718244, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "730e4e048205e1fadc6cd518326d4622d77edad6", "ctime": 1529920799.4748244, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 92274854, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": 
true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0600", "mtime": 1529920799.2578156, "nlink": 1, "path": "/var/lib/docker-puppet/docker-puppet-tasks3.json", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 397, "uid": 0, "version": "18446744071656433489", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-06-25 06:17:59,531 p=25239 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 3] ******************** >2018-06-25 06:17:59,591 p=25239 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:17:59,604 p=25239 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:38,818 p=25239 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:38,844 p=25239 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (bootstrap tasks) for step 3] *** >2018-06-25 06:20:38,912 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-06-25 10:18:00,078 INFO: 88193 -- Running docker-puppet", > "2018-06-25 10:18:00,078 INFO: 88193 -- Service compilation completed.", > "2018-06-25 10:18:00,079 INFO: 88193 -- Starting multiprocess configuration steps. 
Using 8 processes.", > "2018-06-25 10:18:00,095 INFO: 88196 -- Starting configuration of keystone_init_tasks using image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-25 10:18:00,097 INFO: 88196 -- Removing container: docker-puppet-keystone_init_tasks", > "2018-06-25 10:18:00,151 INFO: 88196 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-25 10:20:38,754 INFO: 88196 -- Removing container: docker-puppet-keystone_init_tasks", > "2018-06-25 10:20:38,818 INFO: 88196 -- Finished processing puppet configs for keystone_init_tasks" > ] >} >2018-06-25 06:20:38,913 p=25239 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:20:38,923 p=25239 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:20:38,929 p=25239 u=mistral | PLAY [External deployment step 4] ********************************************** >2018-06-25 06:20:38,952 p=25239 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-06-25 06:20:38,976 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:38,995 p=25239 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-06-25 06:20:39,026 p=25239 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,029 p=25239 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-06-25 
06:20:39,035 p=25239 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,057 p=25239 u=mistral | TASK [generate inventory] ****************************************************** >2018-06-25 06:20:39,076 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,095 p=25239 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-06-25 06:20:39,116 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,135 p=25239 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-06-25 06:20:39,154 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,172 p=25239 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-06-25 06:20:39,197 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,220 p=25239 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-06-25 06:20:39,241 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,261 p=25239 u=mistral | TASK [generate collect nodes uuid playbook] ************************************ >2018-06-25 06:20:39,279 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,299 p=25239 u=mistral | TASK [set ceph-ansible verbosity] 
********************************************** >2018-06-25 06:20:39,319 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,337 p=25239 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-06-25 06:20:39,356 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,375 p=25239 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-06-25 06:20:39,395 p=25239 u=mistral | skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,415 p=25239 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-06-25 06:20:39,433 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,452 p=25239 u=mistral | TASK [generate ceph-ansible group vars mgrs] *********************************** >2018-06-25 06:20:39,473 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,491 p=25239 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-06-25 06:20:39,515 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,536 p=25239 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-06-25 06:20:39,557 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,577 p=25239 u=mistral | TASK [set ceph-ansible group vars clients] 
************************************* >2018-06-25 06:20:39,600 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,619 p=25239 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-06-25 06:20:39,640 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,659 p=25239 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-06-25 06:20:39,681 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,700 p=25239 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-06-25 06:20:39,721 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,726 p=25239 u=mistral | PLAY [Overcloud deploy step tasks for 4] *************************************** >2018-06-25 06:20:39,754 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:20:39,786 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,812 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,831 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,859 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:20:39,895 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,923 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-25 06:20:39,938 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:39,962 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:20:39,994 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,024 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,039 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,064 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:20:40,098 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,127 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,141 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,166 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:20:40,202 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,227 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,244 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,250 p=25239 u=mistral | PLAY [Overcloud common deploy step tasks 4] ************************************ >2018-06-25 06:20:40,278 p=25239 u=mistral | TASK [Create /var/lib/tripleo-config directory] 
******************************** >2018-06-25 06:20:40,310 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,338 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,354 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,381 p=25239 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-06-25 06:20:40,413 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,443 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,457 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,482 p=25239 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-06-25 06:20:40,517 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,549 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,565 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,594 p=25239 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-06-25 06:20:40,630 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,658 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,672 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,697 p=25239 u=mistral | TASK [Create /var/lib/docker-config-scripts] *********************************** >2018-06-25 06:20:40,731 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,760 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,774 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,796 p=25239 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-06-25 06:20:40,825 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,854 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,869 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:40,942 p=25239 u=mistral | TASK [Write docker config scripts] ********************************************* >2018-06-25 06:20:41,015 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) 
Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': 'nova_api_discover_hosts.sh'}) => {"changed": false, "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get 
/etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to 
register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,016 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': 'create_swift_secret.sh'}) => {"changed": false, "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get 
/etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,017 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': 'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,018 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': u'set_swift_keymaster_key_id.sh'}) => 
{"changed": false, "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,019 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n 
--modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': u'docker_puppet_apply.sh'}) => {"changed": false, "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,020 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', 'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) => {"changed": false, "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating 
default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,023 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >2018-06-25 
06:20:41,067 p=25239 u=mistral | TASK [Set docker_config_default fact] ****************************************** >2018-06-25 06:20:41,107 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,138 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,139 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,139 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,140 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,140 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,142 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,144 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,157 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was 
specified for this result", "changed": false} >2018-06-25 06:20:41,157 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,166 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,166 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,167 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,168 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,172 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,175 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,180 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,184 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,208 p=25239 u=mistral | TASK [Set 
docker_startup_configs_with_default fact] **************************** >2018-06-25 06:20:41,241 p=25239 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,269 p=25239 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,290 p=25239 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:20:41,318 p=25239 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-06-25 06:20:41,352 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,378 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,392 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,417 p=25239 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-06-25 06:20:41,480 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,488 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', 
u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_libvirt': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', 
u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", 
"--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", 
"/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,513 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': [u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'memcached_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log'], 'user': u'root', 'volumes': [u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', 
u'/var/log/containers/memcached:/var/log/'], 'detach': False, 'privileged': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=eT4ymnWN2YlqROumSbpNpoGCB', u'DB_ROOT_PASSWORD=ufdBL6tH5c'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=eK5rGtu1BhrxK9TvrK0l'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", 
"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "net": "host", "privileged": false, 
"restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "memcached_init_logs": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", 
"DB_CLUSTERCHECK_PASSWORD=eT4ymnWN2YlqROumSbpNpoGCB", "DB_ROOT_PASSWORD=ufdBL6tH5c"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=eK5rGtu1BhrxK9TvrK0l"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "privileged": false, 
"start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,535 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', 
u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', 'user': u'root', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': False}, 'neutron_db_sync': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', u'/lib/modules:/lib/modules:ro', 
u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'fLWtJZCynkwHz2bnZopp1aRC2'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 
'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "fLWtJZCynkwHz2bnZopp1aRC2"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": 
{"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", 
"/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", 
"/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,560 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', 
u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko /var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', 
u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': [u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', 
u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', 
u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'config_volume': u'haproxy', 'command': 
[u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 
'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': [u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 
'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone 
/var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", 
"/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", 
"/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": 
"192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", 
"/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", 
"volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,565 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": 
"step_2", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,566 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,569 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '78ace352-763a-11e8-9c1d-525400166144' --base64 'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw=='"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', 
u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", 
"/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '78ace352-763a-11e8-9c1d-525400166144' --base64 'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,571 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,578 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,582 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_statsd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', 
u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'gnocchi_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 
0 || sleep 5; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", 
"/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, 
"gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,604 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'swift_container_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', 
u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 
'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': 
{'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', 
u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', 
u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 
'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': 
u'host', 'restart': u'always'}, 'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', 
u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": 
"always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", 
"net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,617 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, 
"skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,671 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,671 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,671 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,672 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": 
"none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,673 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,694 p=25239 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-06-25 06:20:41,724 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,749 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,762 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,783 p=25239 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-06-25 06:20:41,859 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s 
-n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,860 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,862 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,864 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': 
True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,868 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': '/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,877 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 
'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': '/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,878 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,881 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/var/lib/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': 
u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/var/lib/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,886 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,983 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': 
'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,984 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/keystone.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,990 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file 
/etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:41,996 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,000 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,005 p=25239 
u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,011 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,015 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": 
"/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,020 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,026 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 
'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': '/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,030 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/haproxy.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,035 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,039 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': 
u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,042 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,049 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': 
True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,052 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/redis.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,063 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,064 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 
'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/glance_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,067 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,071 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': 
u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,088 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,088 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,089 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,090 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': '/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,097 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,099 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,103 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, 
"skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,108 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,113 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': '/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,117 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': 
u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/mysql.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,121 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,125 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': 
u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,130 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,134 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': 
u'/var/log/keystone', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,138 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,142 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,146 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 
'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,150 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,155 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-api-metadata ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": 
"/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,159 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,170 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 
'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,171 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,171 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,176 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,181 p=25239 u=mistral | skipping: [controller-0] => 
(item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,186 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": 
"/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,188 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,193 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': 
u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,197 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,201 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", 
"config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,206 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,210 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': '/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,215 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': 
True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,232 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,233 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,240 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 'path': u'/var/log/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/panko_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": 
[{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,241 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,242 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir 
/etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,243 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,254 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,255 p=25239 u=mistral | skipping: 
[controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,255 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,264 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": 
"Conditional result was False"} >2018-06-25 06:20:42,265 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,267 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': 
'/var/lib/kolla/config_files/horizon.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,312 p=25239 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-06-25 06:20:42,326 p=25239 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-25 06:20:42,352 p=25239 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-25 06:20:42,379 p=25239 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-25 06:20:42,406 p=25239 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-06-25 06:20:42,462 p=25239 u=mistral | skipping: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4'}], 'key': u'step_3'}) => {"changed": false, "item": 
{"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,501 p=25239 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-06-25 06:20:42,534 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,559 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,575 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:20:42,596 p=25239 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-06-25 06:20:43,359 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "ee48fb03297eb703b1954c8852d0f67fab51dac1", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "e66511bcb9efc937174b88035d019e7b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529922042.66-183635554558815/source", "state": "file", "uid": 0} >2018-06-25 06:20:43,369 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "ee48fb03297eb703b1954c8852d0f67fab51dac1", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "e66511bcb9efc937174b88035d019e7b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529922042.7-230870915678210/source", "state": "file", "uid": 0} >2018-06-25 06:20:43,392 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "ee48fb03297eb703b1954c8852d0f67fab51dac1", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "e66511bcb9efc937174b88035d019e7b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529922042.63-157368485974145/source", "state": "file", "uid": 0} >2018-06-25 06:20:43,417 p=25239 u=mistral | TASK [Run puppet host configuration for step 4] ******************************** >2018-06-25 06:20:58,537 p=25239 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-25 06:20:58,582 p=25239 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-25 06:21:03,374 p=25239 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-25 06:21:03,400 p=25239 u=mistral | TASK [Debug output for task which failed: Run puppet host configuration for step 4] *** >2018-06-25 06:21:03,463 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.01 seconds", > "Notice: 
/Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller4]/ensure: created", > "Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{md5}8307434bc8ed4e2a7df4928fb4232778' to '{md5}584f6152b1e25415942779f3d2373f3d'", > "Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{md5}e914149a715dc82812a989314c026305' to '{md5}1483b6eecf3d4796dac2df692d603719'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{md5}913e2613413a45daa402d0fbdbaba676' to '{md5}0f92e52f70b5c64864657201eb9581bb'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{md5}4496fd5e0e88e764e7beb1ae8f0dda6a' to '{md5}01f68b1480c1ec4e3cc125434dd612a0'", > "Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully", > "Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: Applied catalog in 9.56 seconds", > "Changes:", > " Total: 8", > "Events:", > " Success: 8", > "Resources:", > " Corrective change: 1", > " Restarted: 1", > " Total: 226", > " Out of sync: 8", > " Changed: 8", > "Time:", > " Concat file: 0.00", > " File line: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " User: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.01", > " Sysctl: 0.01", > " Augeas: 0.01", > " Firewall: 0.02", > " File: 0.24", > " Pcmk resource default: 0.34", > " Pcmk property: 0.41", > " Package: 0.45", > " Service: 0.63", > " Total: 11.74", > " Last run: 1529922063", > " Config retrieval: 3.57", > " Exec: 6.05", > " Concat fragment: 0.00", > " Filebucket: 0.00", > "Version:", > " 
Config: 1529922049", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. 
They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 39]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-25 06:21:03,493 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.04 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute4]/ensure: created", > "Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{md5}8307434bc8ed4e2a7df4928fb4232778' to '{md5}d7e89518f7d9ec85a8c5f9f3478a61a4'", > "Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{md5}e914149a715dc82812a989314c026305' to '{md5}1483b6eecf3d4796dac2df692d603719'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{md5}913e2613413a45daa402d0fbdbaba676' to '{md5}0f92e52f70b5c64864657201eb9581bb'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{md5}4496fd5e0e88e764e7beb1ae8f0dda6a' to '{md5}01f68b1480c1ec4e3cc125434dd612a0'", > "Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully", > "Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: Applied catalog in 6.88 seconds", > "Changes:", > " Total: 8", > "Events:", > " Success: 8", > "Resources:", > " Corrective change: 1", > " Restarted: 1", > " Total: 150", > " Out of sync: 8", > " Changed: 8", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.01", > " File: 0.22", > " Package: 0.27", > " Service: 0.52", > " Last run: 1529922058", > " Config retrieval: 2.34", > " Exec: 5.30", > " Concat fragment: 0.00", > " Total: 8.68", > "Version:", > " Config: 1529922049", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 37]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-25 06:21:03,513 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 2.05 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage4]/ensure: created", > "Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{md5}8307434bc8ed4e2a7df4928fb4232778' to '{md5}6eaa9d3c8bc4690552edef0426377b25'", > "Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{md5}e914149a715dc82812a989314c026305' to '{md5}1483b6eecf3d4796dac2df692d603719'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{md5}913e2613413a45daa402d0fbdbaba676' to '{md5}0f92e52f70b5c64864657201eb9581bb'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{md5}4496fd5e0e88e764e7beb1ae8f0dda6a' to '{md5}01f68b1480c1ec4e3cc125434dd612a0'", > "Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully", > "Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: Applied catalog in 6.77 seconds", > "Changes:", > " Total: 8", > "Events:", > " Success: 8", > "Resources:", > " Corrective change: 1", > " Restarted: 1", > " 
Total: 144", > " Out of sync: 8", > " Changed: 8", > "Time:", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.01", > " File: 0.18", > " Package: 0.26", > " Service: 0.45", > " Last run: 1529922058", > " Config retrieval: 2.35", > " Exec: 5.27", > " Total: 8.55", > " Filebucket: 0.00", > "Version:", > " Config: 1529922049", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 37]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-25 06:21:03,539 p=25239 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 4] ***************** >2018-06-25 06:21:03,569 p=25239 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:03,595 p=25239 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:03,607 p=25239 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:03,635 p=25239 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 4] *** >2018-06-25 06:21:03,668 p=25239 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:21:03,699 p=25239 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 
06:21:03,712 p=25239 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:21:03,738 p=25239 u=mistral | TASK [Start containers for step 4] ********************************************* >2018-06-25 06:21:04,676 p=25239 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:08,053 p=25239 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:33,573 p=25239 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:33,599 p=25239 u=mistral | TASK [Debug output for task which failed: Start containers for step 4] ********* >2018-06-25 06:21:33,690 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-evaluator ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-evaluator", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "cb7d08d4cc0c: Already exists", > "ee85156498b3: Pulling fs layer", > "ee85156498b3: Verifying Checksum", > "ee85156498b3: Download complete", > "ee85156498b3: Pull complete", > "Digest: sha256:ea8f91c94969dd9ddfe978bf52c432130b41bac65c0af6518a32d7e852d269a2", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-listener ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-listener", > "87891b1a71a5: Pulling fs layer", > "87891b1a71a5: Verifying Checksum", > "87891b1a71a5: Download complete", > "87891b1a71a5: Pull complete", > "Digest: sha256:68015593b63e00bc1f41e3b446a2019b6cffda8cce39f0e4ce7cb3237fe8fbfa", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-notifier ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-notifier", > "61805ac4d2bb: Pulling fs layer", > "61805ac4d2bb: Verifying Checksum", > "61805ac4d2bb: Download complete", > "61805ac4d2bb: Pull complete", > "Digest: sha256:5e596d490bf916b566d26a0b70e03198e6c839cf46c5129e60f1d68bbe71f920", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent", > "ea1d509b6f44: Already exists", > "75b3c56ec939: Pulling fs layer", > "75b3c56ec939: Download complete", > "75b3c56ec939: Pull complete", > "Digest: sha256:f43a960bfd0618ddcf4868f48fec217cfdc26bcef8ede696de8adc8a199ecead", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent", > "6f5e633fcce0: Pulling fs layer", > "6f5e633fcce0: Verifying Checksum", > "6f5e633fcce0: Download complete", > "6f5e633fcce0: Pull complete", > "Digest: sha256:d402d9bde0a474496dcf1d33bb766f7a1cffafda7f30b4bd8560817d018504b7", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-conductor ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-conductor", > "0e3031608420: Already exists", > "102127465b5b: Pulling fs layer", > "102127465b5b: Download complete", > "102127465b5b: Pull complete", > "Digest: sha256:7171aef8364a04f7a40d17ba59f63fd8f829e6b97ecb65d8af1688eb7065fda3", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-consoleauth ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-consoleauth", > "01d1d7c271b1: Pulling fs layer", > "01d1d7c271b1: Verifying Checksum", > "01d1d7c271b1: Download complete", > "01d1d7c271b1: Pull complete", > "Digest: sha256:b59d74a9873a382616c808ddf544ef63af4c4e299e3418e26288adec644b5ddf", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-novncproxy ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-novncproxy", > "86edb10b8a50: Pulling fs layer", > "86edb10b8a50: Verifying Checksum", > "86edb10b8a50: Download complete", > "86edb10b8a50: Pull complete", > "Digest: sha256:3572771c07ea7a5e47193b304b86a473fdc30bef4d492721543f3012f4742888", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-scheduler ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-scheduler", > "c8d634287ee3: Pulling fs layer", > "c8d634287ee3: Verifying Checksum", > "c8d634287ee3: Download complete", > "c8d634287ee3: Pull complete", > "Digest: sha256:8bfb59b1ea5b1cb2ffc43f439c339130459894023559c08c098e330431c1a354", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-sahara-engine ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-sahara-engine", > "6c5f7e9a0fe8: Already exists", > "88f0d8af6f23: Pulling fs layer", > "88f0d8af6f23: Verifying Checksum", > "88f0d8af6f23: Download complete", > "88f0d8af6f23: Pull complete", > "Digest: sha256:b9616a7c1034521973cc62e436571a633357d8e56c7416596b98b79e748ebc08", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-container ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-container", > "a98c7da29d65: Already exists", > "9a05208e5890: Pulling fs layer", > "9a05208e5890: Verifying Checksum", > "9a05208e5890: Download complete", > "9a05208e5890: Pull complete", > "Digest: sha256:93e7998c9d7e6afec02cd695a20f85eae7f99167fd2fa66775b2601f8352a55f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-object ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-object", > "d02d689952a8: Pulling fs layer", > "d02d689952a8: Verifying Checksum", > "d02d689952a8: Download complete", > "d02d689952a8: Pull complete", > "Digest: sha256:64f2e726823838ab7dae517e0e0e72158642c6a54e4126b476c22ed538e4c660", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", > "stdout: 54fbe926a5e8607a483656cd02275519dafa7eee34fc3a2231d65dc1eb18d149", > "stdout: 6ca3975d19d7770633d107213f948b4e583e1a1831140001b15ade107c79ac37", > "stdout: e2a6ec2be8195073394c1adf3a7f6a51a44febd9b766a74631b0e70097deee58", > "stdout: 8e5caca7e92ee7200d9a30764fe79c4f2bafe1703ec95902b108d474151a3410", > "stdout: 181f0af5c3fd1d134e6bee10e0670424316a58d59fd9d917e5d57fdd4b12486f", > "stdout: 87b75fd306e8132a26d75ce40254f33f219e5256af2a1258ba278ac6f0194171", > "stdout: 1598e9bfde9eb823b6cab29e1718a937ffdb16bf7efd7c4c41079e3ff6362441", > "stdout: dfbaf850f6f66208264b54a289d8612b6cb7ff4480a56c72651868a56b066ea2", > "stdout: 982d445be9b9800e8192a8496d615a350987dcbd48c7891ae5432fdd8c77e072", > "stdout: 50c89f9558826f2b06d4b57fe447fe3d2253ce3ca37d9816a1d17e1144f2804e", > "stdout: fc50c2ba8152ae36fc7a7a360ff83c2c2bd6e34bceca1222c4907a762fa95d26", > "stdout: e0d2dcb0a22b780327e674b76313f55cd63643fef9171809240be40c8ca87223", > "stdout: Running command: '/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade 
--sacks-number=128'", > "stderr: + sudo -E kolla_set_configs", > "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", > "INFO:__main__:Validating config file", > "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", > "INFO:__main__:Copying service configuration files", > "INFO:__main__:Deleting /etc/gnocchi/gnocchi.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/gnocchi/gnocchi.conf to /etc/gnocchi/gnocchi.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/10-gnocchi_wsgi.conf to /etc/httpd/conf.d/10-gnocchi_wsgi.conf", > "INFO:__main__:Deleting /etc/httpd/conf.d/ssl.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/ssl.conf to /etc/httpd/conf.d/ssl.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/access_compat.load to /etc/httpd/conf.modules.d/access_compat.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/actions.load to /etc/httpd/conf.modules.d/actions.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/alias.conf to /etc/httpd/conf.modules.d/alias.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/alias.load to /etc/httpd/conf.modules.d/alias.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/auth_basic.load to /etc/httpd/conf.modules.d/auth_basic.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/auth_digest.load to /etc/httpd/conf.modules.d/auth_digest.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_anon.load to /etc/httpd/conf.modules.d/authn_anon.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_core.load to /etc/httpd/conf.modules.d/authn_core.load", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_dbm.load to /etc/httpd/conf.modules.d/authn_dbm.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_file.load to /etc/httpd/conf.modules.d/authn_file.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_core.load to /etc/httpd/conf.modules.d/authz_core.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_dbm.load to /etc/httpd/conf.modules.d/authz_dbm.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_groupfile.load to /etc/httpd/conf.modules.d/authz_groupfile.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_host.load to /etc/httpd/conf.modules.d/authz_host.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_owner.load to /etc/httpd/conf.modules.d/authz_owner.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_user.load to /etc/httpd/conf.modules.d/authz_user.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/autoindex.conf to /etc/httpd/conf.modules.d/autoindex.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/autoindex.load to /etc/httpd/conf.modules.d/autoindex.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/cache.load to /etc/httpd/conf.modules.d/cache.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/cgi.load to /etc/httpd/conf.modules.d/cgi.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav.load to /etc/httpd/conf.modules.d/dav.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav_fs.conf to /etc/httpd/conf.modules.d/dav_fs.conf", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav_fs.load to /etc/httpd/conf.modules.d/dav_fs.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/deflate.conf to /etc/httpd/conf.modules.d/deflate.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/deflate.load to /etc/httpd/conf.modules.d/deflate.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dir.conf to /etc/httpd/conf.modules.d/dir.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dir.load to /etc/httpd/conf.modules.d/dir.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/env.load to /etc/httpd/conf.modules.d/env.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/expires.load to /etc/httpd/conf.modules.d/expires.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/ext_filter.load to /etc/httpd/conf.modules.d/ext_filter.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/filter.load to /etc/httpd/conf.modules.d/filter.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/include.load to /etc/httpd/conf.modules.d/include.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/log_config.load to /etc/httpd/conf.modules.d/log_config.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/logio.load to /etc/httpd/conf.modules.d/logio.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime.conf to /etc/httpd/conf.modules.d/mime.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime.load to /etc/httpd/conf.modules.d/mime.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime_magic.conf to 
/etc/httpd/conf.modules.d/mime_magic.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime_magic.load to /etc/httpd/conf.modules.d/mime_magic.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/negotiation.conf to /etc/httpd/conf.modules.d/negotiation.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/negotiation.load to /etc/httpd/conf.modules.d/negotiation.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/prefork.conf to /etc/httpd/conf.modules.d/prefork.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/prefork.load to /etc/httpd/conf.modules.d/prefork.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/rewrite.load to /etc/httpd/conf.modules.d/rewrite.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/setenvif.conf to /etc/httpd/conf.modules.d/setenvif.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/setenvif.load to /etc/httpd/conf.modules.d/setenvif.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/socache_shmcb.load to /etc/httpd/conf.modules.d/socache_shmcb.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/speling.load to /etc/httpd/conf.modules.d/speling.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/ssl.load to /etc/httpd/conf.modules.d/ssl.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/status.conf to /etc/httpd/conf.modules.d/status.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/status.load to /etc/httpd/conf.modules.d/status.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/substitute.load to 
/etc/httpd/conf.modules.d/substitute.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/suexec.load to /etc/httpd/conf.modules.d/suexec.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/systemd.load to /etc/httpd/conf.modules.d/systemd.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/unixd.load to /etc/httpd/conf.modules.d/unixd.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/usertrack.load to /etc/httpd/conf.modules.d/usertrack.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/version.load to /etc/httpd/conf.modules.d/version.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/vhost_alias.load to /etc/httpd/conf.modules.d/vhost_alias.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/wsgi.conf to /etc/httpd/conf.modules.d/wsgi.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/wsgi.load to /etc/httpd/conf.modules.d/wsgi.load", > "INFO:__main__:Deleting /etc/httpd/conf/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf/httpd.conf to /etc/httpd/conf/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf/ports.conf to /etc/httpd/conf/ports.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/my.cnf.d/tripleo.cnf to /etc/my.cnf.d/tripleo.cnf", > "INFO:__main__:Creating directory /etc/systemd/system/httpd.service.d", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/systemd/system/httpd.service.d/httpd.conf to /etc/systemd/system/httpd.service.d/httpd.conf", > "INFO:__main__:Creating directory /var/www/cgi-bin/gnocchi", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/var/www/cgi-bin/gnocchi/app to /var/www/cgi-bin/gnocchi/app", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src-ceph/ceph.conf to /etc/ceph/ceph.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.admin.keyring to /etc/ceph/ceph.client.admin.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.mon.keyring to /etc/ceph/ceph.mon.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.mgr.controller-0.keyring to /etc/ceph/ceph.mgr.controller-0.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.manila.keyring to /etc/ceph/ceph.client.manila.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.radosgw.keyring to /etc/ceph/ceph.client.radosgw.keyring", > "INFO:__main__:Writing out command to execute", > "INFO:__main__:Setting permission for /var/log/gnocchi", > "INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring", > "++ cat /run_command", > "+ CMD='/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128'", > "+ ARGS=", > "+ [[ ! -n '' ]]", > "+ . kolla_extend_start", > "++ GNOCCHI_LOG_DIR=/var/log/kolla/gnocchi", > "++ [[ ! -d /var/log/kolla/gnocchi ]]", > "++ mkdir -p /var/log/kolla/gnocchi", > "+++ stat -c %U:%G /var/log/kolla/gnocchi", > "++ [[ root:kolla != \\g\\n\\o\\c\\c\\h\\i\\:\\k\\o\\l\\l\\a ]]", > "++ chown gnocchi:kolla /var/log/kolla/gnocchi", > "+++ stat -c %a /var/log/kolla/gnocchi", > "++ [[ 2755 != \\7\\5\\5 ]]", > "++ chmod 755 /var/log/kolla/gnocchi", > "++ . 
/usr/local/bin/kolla_gnocchi_extend_start", > "+++ [[ rhel =~ debian|ubuntu ]]", > "+++ rm -rf /var/run/httpd/htcacheclean /run/httpd/htcacheclean '/tmp/httpd*'", > "+++ [[ -n '' ]]", > "+ echo 'Running command: '\\''/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128'\\'''", > "+ exec /usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", > "2018-06-25 10:21:17,523 [22] WARNING oslo_config.cfg: Option \"coordination_url\" from group \"storage\" is deprecated. Use option \"coordination_url\" from group \"DEFAULT\".", > "2018-06-25 10:21:17,523 [22] INFO gnocchi.service: Gnocchi version 4.2.5", > "2018-06-25 10:21:17,523 [22] DEBUG gnocchi.service: ********************************************************************************", > "2018-06-25 10:21:17,523 [22] DEBUG gnocchi.service: Configuration options gathered from:", > "2018-06-25 10:21:17,523 [22] DEBUG gnocchi.service: command line args: ['--sacks-number=128']", > "2018-06-25 10:21:17,523 [22] DEBUG gnocchi.service: config files: ['/usr/share/gnocchi/gnocchi-dist.conf', '/etc/gnocchi/gnocchi.conf']", > "2018-06-25 10:21:17,523 [22] DEBUG gnocchi.service: ================================================================================", > "2018-06-25 10:21:17,524 [22] DEBUG gnocchi.service: config_dir = []", > "2018-06-25 10:21:17,524 [22] DEBUG gnocchi.service: config_file = ['/usr/share/gnocchi/gnocchi-dist.conf', '/etc/gnocchi/gnocchi.conf']", > "2018-06-25 10:21:17,524 [22] DEBUG gnocchi.service: coordination_url = ****", > "2018-06-25 10:21:17,524 [22] DEBUG gnocchi.service: debug = True", > "2018-06-25 10:21:17,524 [22] DEBUG gnocchi.service: log_dir = /var/log/gnocchi", > "2018-06-25 10:21:17,524 [22] DEBUG gnocchi.service: log_file = None", > "2018-06-25 10:21:17,524 [22] DEBUG gnocchi.service: parallel_operations = 8", > "2018-06-25 10:21:17,524 [22] DEBUG gnocchi.service: sacks_number = 128", > "2018-06-25 10:21:17,524 [22] DEBUG 
gnocchi.service: skip_archive_policies_creation = False", > "2018-06-25 10:21:17,524 [22] DEBUG gnocchi.service: skip_incoming = False", > "2018-06-25 10:21:17,525 [22] DEBUG gnocchi.service: skip_index = False", > "2018-06-25 10:21:17,525 [22] DEBUG gnocchi.service: skip_storage = False", > "2018-06-25 10:21:17,525 [22] DEBUG gnocchi.service: syslog_log_facility = user", > "2018-06-25 10:21:17,525 [22] DEBUG gnocchi.service: use_journal = False", > "2018-06-25 10:21:17,525 [22] DEBUG gnocchi.service: use_syslog = False", > "2018-06-25 10:21:17,525 [22] DEBUG gnocchi.service: verbose = True", > "2018-06-25 10:21:17,525 [22] DEBUG gnocchi.service: statsd.archive_policy_name = low", > "2018-06-25 10:21:17,525 [22] DEBUG gnocchi.service: statsd.creator = None", > "2018-06-25 10:21:17,525 [22] DEBUG gnocchi.service: statsd.flush_delay = 10.0", > "2018-06-25 10:21:17,526 [22] DEBUG gnocchi.service: statsd.host = 0.0.0.0", > "2018-06-25 10:21:17,526 [22] DEBUG gnocchi.service: statsd.port = 8125", > "2018-06-25 10:21:17,526 [22] DEBUG gnocchi.service: statsd.resource_id = 0a8b55df-f90f-491c-8cb9-7cdecec6fc26", > "2018-06-25 10:21:17,526 [22] DEBUG gnocchi.service: incoming.ceph_conffile = /etc/ceph/ceph.conf", > "2018-06-25 10:21:17,526 [22] DEBUG gnocchi.service: incoming.ceph_keyring = /etc/ceph/ceph.client.openstack.keyring", > "2018-06-25 10:21:17,527 [22] DEBUG gnocchi.service: incoming.ceph_pool = metrics", > "2018-06-25 10:21:17,527 [22] DEBUG gnocchi.service: incoming.ceph_secret = ****", > "2018-06-25 10:21:17,527 [22] DEBUG gnocchi.service: incoming.ceph_timeout = 30", > "2018-06-25 10:21:17,527 [22] DEBUG gnocchi.service: incoming.ceph_username = openstack", > "2018-06-25 10:21:17,528 [22] DEBUG gnocchi.service: incoming.driver = redis", > "2018-06-25 10:21:17,528 [22] DEBUG gnocchi.service: incoming.file_basepath = /var/lib/gnocchi", > "2018-06-25 10:21:17,528 [22] DEBUG gnocchi.service: incoming.redis_url = 
redis://:vsVeY1UccHwAvv4JiUSXKTNn6@172.17.1.16:6379/", > "2018-06-25 10:21:17,528 [22] DEBUG gnocchi.service: incoming.s3_access_key_id = ", > "2018-06-25 10:21:17,528 [22] DEBUG gnocchi.service: incoming.s3_bucket_prefix = gnocchi", > "2018-06-25 10:21:17,529 [22] DEBUG gnocchi.service: incoming.s3_check_consistency_timeout = 60.0", > "2018-06-25 10:21:17,529 [22] DEBUG gnocchi.service: incoming.s3_endpoint_url = ", > "2018-06-25 10:21:17,529 [22] DEBUG gnocchi.service: incoming.s3_max_pool_connections = 50", > "2018-06-25 10:21:17,530 [22] DEBUG gnocchi.service: incoming.s3_region_name = ", > "2018-06-25 10:21:17,530 [22] DEBUG gnocchi.service: incoming.s3_secret_access_key = ", > "2018-06-25 10:21:17,530 [22] DEBUG gnocchi.service: incoming.swift_auth_insecure = False", > "2018-06-25 10:21:17,530 [22] DEBUG gnocchi.service: incoming.swift_auth_version = 1", > "2018-06-25 10:21:17,531 [22] DEBUG gnocchi.service: incoming.swift_authurl = http://localhost:8080/auth/v1.0", > "2018-06-25 10:21:17,531 [22] DEBUG gnocchi.service: incoming.swift_cacert = ", > "2018-06-25 10:21:17,531 [22] DEBUG gnocchi.service: incoming.swift_container_prefix = gnocchi", > "2018-06-25 10:21:17,531 [22] DEBUG gnocchi.service: incoming.swift_endpoint_type = publicURL", > "2018-06-25 10:21:17,532 [22] DEBUG gnocchi.service: incoming.swift_key = ****", > "2018-06-25 10:21:17,532 [22] DEBUG gnocchi.service: incoming.swift_preauthtoken = ****", > "2018-06-25 10:21:17,532 [22] DEBUG gnocchi.service: incoming.swift_project_domain_name = Default", > "2018-06-25 10:21:17,532 [22] DEBUG gnocchi.service: incoming.swift_project_name = ", > "2018-06-25 10:21:17,532 [22] DEBUG gnocchi.service: incoming.swift_region = ", > "2018-06-25 10:21:17,533 [22] DEBUG gnocchi.service: incoming.swift_service_type = object-store", > "2018-06-25 10:21:17,533 [22] DEBUG gnocchi.service: incoming.swift_timeout = 300", > "2018-06-25 10:21:17,533 [22] DEBUG gnocchi.service: incoming.swift_url = ", > "2018-06-25 
10:21:17,533 [22] DEBUG gnocchi.service: incoming.swift_user = admin:admin", > "2018-06-25 10:21:17,534 [22] DEBUG gnocchi.service: incoming.swift_user_domain_name = Default", > "2018-06-25 10:21:17,534 [22] DEBUG gnocchi.service: metricd.greedy = True", > "2018-06-25 10:21:17,534 [22] DEBUG gnocchi.service: metricd.metric_cleanup_delay = 300", > "2018-06-25 10:21:17,534 [22] DEBUG gnocchi.service: metricd.metric_processing_delay = 30", > "2018-06-25 10:21:17,534 [22] DEBUG gnocchi.service: metricd.metric_reporting_delay = 120", > "2018-06-25 10:21:17,534 [22] DEBUG gnocchi.service: metricd.processing_replicas = 3", > "2018-06-25 10:21:17,534 [22] DEBUG gnocchi.service: metricd.workers = 4", > "2018-06-25 10:21:17,534 [22] DEBUG gnocchi.service: database.backend = sqlalchemy", > "2018-06-25 10:21:17,535 [22] DEBUG gnocchi.service: database.connection = ****", > "2018-06-25 10:21:17,535 [22] DEBUG gnocchi.service: database.connection_debug = 0", > "2018-06-25 10:21:17,535 [22] DEBUG gnocchi.service: database.connection_parameters = ", > "2018-06-25 10:21:17,535 [22] DEBUG gnocchi.service: database.connection_recycle_time = 3600", > "2018-06-25 10:21:17,535 [22] DEBUG gnocchi.service: database.connection_trace = False", > "2018-06-25 10:21:17,535 [22] DEBUG gnocchi.service: database.db_inc_retry_interval = True", > "2018-06-25 10:21:17,535 [22] DEBUG gnocchi.service: database.db_max_retries = 20", > "2018-06-25 10:21:17,536 [22] DEBUG gnocchi.service: database.db_max_retry_interval = 10", > "2018-06-25 10:21:17,536 [22] DEBUG gnocchi.service: database.db_retry_interval = 1", > "2018-06-25 10:21:17,536 [22] DEBUG gnocchi.service: database.max_overflow = 50", > "2018-06-25 10:21:17,536 [22] DEBUG gnocchi.service: database.max_pool_size = 5", > "2018-06-25 10:21:17,536 [22] DEBUG gnocchi.service: database.max_retries = 10", > "2018-06-25 10:21:17,536 [22] DEBUG gnocchi.service: database.min_pool_size = 1", > "2018-06-25 10:21:17,536 [22] DEBUG gnocchi.service: 
database.mysql_enable_ndb = False", > "2018-06-25 10:21:17,537 [22] DEBUG gnocchi.service: database.mysql_sql_mode = TRADITIONAL", > "2018-06-25 10:21:17,537 [22] DEBUG gnocchi.service: database.pool_timeout = None", > "2018-06-25 10:21:17,537 [22] DEBUG gnocchi.service: database.retry_interval = 10", > "2018-06-25 10:21:17,537 [22] DEBUG gnocchi.service: database.slave_connection = ****", > "2018-06-25 10:21:17,537 [22] DEBUG gnocchi.service: database.sqlite_synchronous = True", > "2018-06-25 10:21:17,537 [22] DEBUG gnocchi.service: database.use_db_reconnect = False", > "2018-06-25 10:21:17,537 [22] DEBUG gnocchi.service: storage.ceph_conffile = /etc/ceph/ceph.conf", > "2018-06-25 10:21:17,537 [22] DEBUG gnocchi.service: storage.ceph_keyring = /etc/ceph/ceph.client.openstack.keyring", > "2018-06-25 10:21:17,537 [22] DEBUG gnocchi.service: storage.ceph_pool = metrics", > "2018-06-25 10:21:17,538 [22] DEBUG gnocchi.service: storage.ceph_secret = ****", > "2018-06-25 10:21:17,538 [22] DEBUG gnocchi.service: storage.ceph_timeout = 30", > "2018-06-25 10:21:17,538 [22] DEBUG gnocchi.service: storage.ceph_username = openstack", > "2018-06-25 10:21:17,538 [22] DEBUG gnocchi.service: storage.driver = ceph", > "2018-06-25 10:21:17,538 [22] DEBUG gnocchi.service: storage.file_basepath = /var/lib/gnocchi", > "2018-06-25 10:21:17,538 [22] DEBUG gnocchi.service: storage.redis_url = redis://localhost:6379/", > "2018-06-25 10:21:17,538 [22] DEBUG gnocchi.service: storage.s3_access_key_id = None", > "2018-06-25 10:21:17,538 [22] DEBUG gnocchi.service: storage.s3_bucket_prefix = gnocchi", > "2018-06-25 10:21:17,538 [22] DEBUG gnocchi.service: storage.s3_check_consistency_timeout = 60.0", > "2018-06-25 10:21:17,538 [22] DEBUG gnocchi.service: storage.s3_endpoint_url = None", > "2018-06-25 10:21:17,538 [22] DEBUG gnocchi.service: storage.s3_max_pool_connections = 50", > "2018-06-25 10:21:17,539 [22] DEBUG gnocchi.service: storage.s3_region_name = None", > "2018-06-25 10:21:17,539 
[22] DEBUG gnocchi.service: storage.s3_secret_access_key = None", > "2018-06-25 10:21:17,539 [22] DEBUG gnocchi.service: storage.swift_auth_insecure = False", > "2018-06-25 10:21:17,539 [22] DEBUG gnocchi.service: storage.swift_auth_version = 1", > "2018-06-25 10:21:17,539 [22] DEBUG gnocchi.service: storage.swift_authurl = http://localhost:8080/auth/v1.0", > "2018-06-25 10:21:17,539 [22] DEBUG gnocchi.service: storage.swift_cacert = None", > "2018-06-25 10:21:17,539 [22] DEBUG gnocchi.service: storage.swift_container_prefix = gnocchi", > "2018-06-25 10:21:17,539 [22] DEBUG gnocchi.service: storage.swift_endpoint_type = publicURL", > "2018-06-25 10:21:17,539 [22] DEBUG gnocchi.service: storage.swift_key = ****", > "2018-06-25 10:21:17,539 [22] DEBUG gnocchi.service: storage.swift_preauthtoken = ****", > "2018-06-25 10:21:17,539 [22] DEBUG gnocchi.service: storage.swift_project_domain_name = Default", > "2018-06-25 10:21:17,539 [22] DEBUG gnocchi.service: storage.swift_project_name = None", > "2018-06-25 10:21:17,540 [22] DEBUG gnocchi.service: storage.swift_region = None", > "2018-06-25 10:21:17,540 [22] DEBUG gnocchi.service: storage.swift_service_type = object-store", > "2018-06-25 10:21:17,540 [22] DEBUG gnocchi.service: storage.swift_timeout = 300", > "2018-06-25 10:21:17,540 [22] DEBUG gnocchi.service: storage.swift_url = None", > "2018-06-25 10:21:17,540 [22] DEBUG gnocchi.service: storage.swift_user = admin:admin", > "2018-06-25 10:21:17,540 [22] DEBUG gnocchi.service: storage.swift_user_domain_name = Default", > "2018-06-25 10:21:17,540 [22] DEBUG gnocchi.service: indexer.url = ****", > "2018-06-25 10:21:17,540 [22] DEBUG gnocchi.service: api.auth_mode = keystone", > "2018-06-25 10:21:17,540 [22] DEBUG gnocchi.service: api.host = 0.0.0.0", > "2018-06-25 10:21:17,541 [22] DEBUG gnocchi.service: api.max_limit = 1000", > "2018-06-25 10:21:17,541 [22] DEBUG gnocchi.service: api.operation_timeout = 10", > "2018-06-25 10:21:17,541 [22] DEBUG gnocchi.service: 
api.paste_config = api-paste.ini", > "2018-06-25 10:21:17,541 [22] DEBUG gnocchi.service: api.port = 8041", > "2018-06-25 10:21:17,541 [22] DEBUG gnocchi.service: api.uwsgi_mode = http", > "2018-06-25 10:21:17,541 [22] DEBUG gnocchi.service: archive_policy.default_aggregation_methods = ['mean', 'min', 'max', 'sum', 'std', 'count']", > "2018-06-25 10:21:17,541 [22] DEBUG gnocchi.service: ********************************************************************************", > "2018-06-25 10:21:17,928 [22] INFO gnocchi.cli.manage: Upgrading indexer SQLAlchemyIndexer: mysql+pymysql://gnocchi:DjA4zFq3WPxVVEZMnXrckbxoJ@172.17.1.15/gnocchi?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf", > "2018-06-25 10:21:18,093 [22] INFO gnocchi.common.ceph: Ceph storage backend use 'cradox' python library", > "2018-06-25 10:21:18,136 [22] INFO gnocchi.cli.manage: Upgrading storage CephStorage: 78ace352-763a-11e8-9c1d-525400166144", > "2018-06-25 10:21:18,137 [22] INFO gnocchi.cli.manage: Upgrading incoming storage RedisStorage: StrictRedis<ConnectionPool<Connection<host=172.17.1.16,port=6379,db=0>>>", > "stdout: 087c10a3e0969a24e7e6d96ef75bfe243e1f64428bf20c45106fb79880749111", > "stdout: 0f8fff15e3aaf6171af0d0c8384c721ace8f60627487bb0b97ba6da54059d774", > "stdout: 5161a36bc616227e3728edbdcda83f654611107ec7c26acab6a995c7b6a10492", > "stdout: a06cde7679b66d14ce73d41c60f89d63536148f7fae69463884141c442908938", > "stdout: 684346e0fbb9870e8c922fd88a611b01b8218eb1c7dc246ec17f73e2e028f809", > "stdout: 5a6946d3ca8b27128e43717fb1e70444e390d5addf403f15e3a2e19c751d6874", > "stdout: d2309323b2d6ec526ffb62f9c437c758f0dfff3c7ab83d4fb913260a1e958b7f", > "stdout: 9b48e4eedf3ead6ef5d799c734a3382d118e645cd62fc839a343188cd900a977", > "stdout: 793a531079cecc62f8307d42187d4b2d542869f2582a410ee4b88dbc6e359c5b", > "stdout: 90fa2e232423c46db101aee0b81f9367e68913cfc13fd10204894aa960a7d26c", > "stdout: 7cb7600d6ce1f71d7b4482e8ffd8e31b37d5de56d6001513ebb1f5056a9a4347", > "stdout: 
d1c353ec5ea2f03c82c41969124d7a353855541571d4574a7f8a73b0737f0d4b", > "stdout: 5530dbdcbfee21306000f048295aee6196262a95ab8e4862afe2159296ee6440", > "stdout: b678fa977730bf5b8284b3d12b73376d7e2240e036143e39cee0bb94720d9e1c", > "stdout: 9341db25b4ad0b483b50088ec94dc5243e180f7badbe3c3c203ef309f6fc3d75", > "stdout: 78921b749ec4d717b8edbe93dafd59797d21e4ff7af0050c4bf0e533e361e62e", > "stdout: f3e1d6431d66200fad9ca9662f655d41d67568a7fa7b983f3ed11251007fb717", > "stdout: d242185d208f433c6a0c14b1325c024585659545fbdd232cc29985037a13d254", > "stdout: 8b1197d0c9f7241bd0f24dc6cbe7db259d04df8044c4c71a8e43aada350dfb21", > "stdout: 0c9b32e9e58f5554df71ca0b32181470982977b13b873c04e5aabc284057220d", > "stdout: 7d4ac8ac16143a56fc404385e0387843aee60d015b7aae2a86135baded114c09", > "stdout: 171cbdf0dc7c2d7611beb8d41356b61f538bb0421c34dd3dcddc0770e0911c24", > "stdout: d9e361ee3d7d11212f661d8432d0e93f52672b85d5e0c5181a966b7b51171b6d", > "stdout: 4d68dd40967b1c0835bd8d4069959e7bea7369164db2b1227009a7f6597a1555", > "stdout: ", > "stdout: ba08af615c1ccb5e1eb2efa9b30eb30896c59f048ebcd45ecbbc4fc193b3c104", > "stdout: 5d8673fd8e4dbd07e98cf8e2ee71b27cd3103124a2e9bd378340067f34547d53", > "stdout: 1b7d6ba776e697374c1c92ff1c0fc0605325ec910fb4405b3ed3bd481f53e17e", > "stdout: efe1edc60b23366f1f603c4df1dbda29e0e1f72ff08a64688473207cdbed8e39", > "stdout: 16afa6c498f1b820adc9e9587b81aa467ffc123802a0617d64165719e44acd31", > "stdout: 5d81cc7749952d229ba471e4dc94e55940282689f2e59d26032345e7c4382d08", > "stdout: 25e2b5583b02d15692857a87f502ca07484ada3135eaaa201440ac43932cf121", > "stdout: a8c33637cf64201f3d85d64394e43a0edddfafc8463dbf28eba2d8a00bceeb5a", > "stdout: 9d87920c7755d31a1a4ecad69496237b6db50e6d0c838728345c993e430f8317" > ] >} >2018-06-25 06:21:33,708 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-compute 
... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-compute", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "333aa6b2b383: Already exists", > "90108de13a14: Pulling fs layer", > "90108de13a14: Verifying Checksum", > "90108de13a14: Download complete", > "90108de13a14: Pull complete", > "Digest: sha256:e645155266de12baafedb66bc71148fb800414967c09c7b078c289ff61b17fb3", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent", > "ea1d509b6f44: Already exists", > "6f5e633fcce0: Pulling fs layer", > "6f5e633fcce0: Verifying Checksum", > "6f5e633fcce0: Download complete", > "6f5e633fcce0: Pull complete", > "Digest: sha256:d402d9bde0a474496dcf1d33bb766f7a1cffafda7f30b4bd8560817d018504b7", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", > "stdout: be11f54708db4f1c8838ed519ef3b7268c17a8163efde54c61970b1157baec6e", > "stdout: 695a4dd175471b7e6928e6dad1406561f95a1a2c2de958314ad343d2cdd57959", > "stdout: fb53716af1aef13fe6cf6dbaf8a3791e544a349dcdec4c78d48af91d750fe24a", > "stdout: Secret 78ace352-763a-11e8-9c1d-525400166144 created", > "Secret value set", > "stdout: a5a8f7a6523cd055824a29e4a4856242d74ba0c26d3ce007f7354a8a5a058326", > "stdout: 7cb78c68002a8027ed4ac34ac006b41eaa52d0c05828bf0ec50a68a18ee75971" > ] >} >2018-06-25 06:21:33,711 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: 038174bdda2ea968b360ac62d67c7d4163038292630ba01696fe62bf0a787b7f", > "", > "stderr: " > ] >} >2018-06-25 06:21:33,742 p=25239 
u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks4.json exists] ******** >2018-06-25 06:21:34,222 p=25239 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 06:21:34,223 p=25239 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 06:21:34,606 p=25239 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-25 06:21:34,634 p=25239 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 4] ******************** >2018-06-25 06:21:34,666 p=25239 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:34,692 p=25239 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:34,704 p=25239 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:34,728 p=25239 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (bootstrap tasks) for step 4] *** >2018-06-25 06:21:34,760 p=25239 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:21:34,788 p=25239 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:21:34,804 p=25239 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:21:34,810 p=25239 u=mistral | PLAY [External deployment step 5] ********************************************** >2018-06-25 06:21:34,832 p=25239 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-06-25 06:21:34,853 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-25 06:21:34,872 p=25239 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-06-25 06:21:34,898 p=25239 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"} >2018-06-25 06:21:34,909 p=25239 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-06-25 06:21:34,914 p=25239 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/43d4be1d-ea29-44f3-8477-c51733dea396/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-06-25 06:21:34,934 p=25239 u=mistral | TASK [generate inventory] ****************************************************** >2018-06-25 06:21:34,965 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:34,986 p=25239 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-06-25 06:21:35,012 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,034 p=25239 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-06-25 06:21:35,055 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,075 p=25239 u=mistral | TASK [set ceph-ansible extra vars] 
********************************************* >2018-06-25 06:21:35,097 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,116 p=25239 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-06-25 06:21:35,137 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,156 p=25239 u=mistral | TASK [generate collect nodes uuid playbook] ************************************ >2018-06-25 06:21:35,176 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,195 p=25239 u=mistral | TASK [set ceph-ansible verbosity] ********************************************** >2018-06-25 06:21:35,214 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,233 p=25239 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-06-25 06:21:35,251 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,271 p=25239 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-06-25 06:21:35,298 p=25239 u=mistral | skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,322 p=25239 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-06-25 06:21:35,342 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,362 p=25239 u=mistral | TASK [generate ceph-ansible group vars mgrs] 
*********************************** >2018-06-25 06:21:35,382 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,402 p=25239 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-06-25 06:21:35,423 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,444 p=25239 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-06-25 06:21:35,464 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,484 p=25239 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-06-25 06:21:35,505 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,524 p=25239 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-06-25 06:21:35,543 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,563 p=25239 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-06-25 06:21:35,583 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,602 p=25239 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-06-25 06:21:35,625 p=25239 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,634 p=25239 u=mistral | PLAY [Overcloud deploy step tasks for 5] *************************************** >2018-06-25 06:21:35,660 p=25239 u=mistral | TASK [include_role] 
************************************************************ >2018-06-25 06:21:35,694 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,721 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,734 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,758 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:21:35,791 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,817 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,830 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,853 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:21:35,885 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,916 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,929 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:35,997 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:21:36,056 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,090 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,105 p=25239 u=mistral | skipping: 
[ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,132 p=25239 u=mistral | TASK [include_role] ************************************************************ >2018-06-25 06:21:36,166 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,193 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,211 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,218 p=25239 u=mistral | PLAY [Overcloud common deploy step tasks 5] ************************************ >2018-06-25 06:21:36,250 p=25239 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-06-25 06:21:36,288 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,318 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,336 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,368 p=25239 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-06-25 06:21:36,405 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,441 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,464 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,498 p=25239 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-06-25 06:21:36,533 p=25239 u=mistral | skipping: 
[controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,565 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,582 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,611 p=25239 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-06-25 06:21:36,645 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,675 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,689 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,715 p=25239 u=mistral | TASK [Create /var/lib/docker-config-scripts] *********************************** >2018-06-25 06:21:36,755 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,780 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,797 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,818 p=25239 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-06-25 06:21:36,849 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,878 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,895 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:36,918 p=25239 u=mistral | 
TASK [Write docker config scripts] ********************************************* >2018-06-25 06:21:36,995 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,000 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get 
/etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep 
$loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': 'nova_api_discover_hosts.sh'}) => {"changed": false, "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: 
timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,008 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': 
'create_swift_secret.sh'}) => {"changed": false, "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,009 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': 'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,017 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n 
secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': 'set_swift_keymaster_key_id.sh'}) => {"changed": false, "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,019 p=25239 u=mistral | skipping: [controller-0] => 
(item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': 'docker_puppet_apply.sh'}) => {"changed": false, "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,025 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n 
echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', 'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) => {"changed": false, "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,053 p=25239 u=mistral | TASK [Set docker_config_default fact] ****************************************** >2018-06-25 06:21:37,123 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,124 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,125 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,136 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,137 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", 
"changed": false} >2018-06-25 06:21:37,137 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,146 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,147 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,147 p=25239 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,148 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,149 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,149 p=25239 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,153 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,161 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,174 p=25239 u=mistral | skipping: [ceph-0] => 
(item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,174 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,175 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,180 p=25239 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,204 p=25239 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** >2018-06-25 06:21:37,236 p=25239 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,261 p=25239 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,275 p=25239 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:37,298 p=25239 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-06-25 06:21:37,328 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,356 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,372 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-25 06:21:37,396 p=25239 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-06-25 06:21:37,497 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,498 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,499 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,507 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', 
u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': [u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'memcached_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log'], 'user': u'root', 'volumes': [u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'detach': False, 'privileged': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=eT4ymnWN2YlqROumSbpNpoGCB', u'DB_ROOT_PASSWORD=ufdBL6tH5c'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 
2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' 
'192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=eK5rGtu1BhrxK9TvrK0l'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' 
'192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "memcached_init_logs": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 
0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=eT4ymnWN2YlqROumSbpNpoGCB", "DB_ROOT_PASSWORD=ufdBL6tH5c"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", 
"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=eK5rGtu1BhrxK9TvrK0l"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,535 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': 
{'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 
'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', 
u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', 
u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': False}, 'neutron_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade 
head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'fLWtJZCynkwHz2bnZopp1aRC2'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer 
/var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": 
false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", 
"/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "fLWtJZCynkwHz2bnZopp1aRC2"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && 
/usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", 
"/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 2, "user": 
"root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", 
"/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,550 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': 
u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko /var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': [u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'2', 
u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 
'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 
'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': [u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "privileged": false, "user": 
"root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", 
"/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", 
"file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", 
"file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", 
"file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, 
"image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,567 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,568 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,568 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,570 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,573 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': 
[u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_libvirt': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", 
"/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", 
"/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,575 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,576 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,579 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file 
/etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '78ace352-763a-11e8-9c1d-525400166144' --base64 'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw=='"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', 
u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '78ace352-763a-11e8-9c1d-525400166144' --base64 'AQClJS1bAAAAABAAdzMAn8GjNnkp0Gh5bS8IMw=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,586 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', 
u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_statsd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', 
u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'gnocchi_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529919702'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 
0 || sleep 5; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", 
"/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, 
"gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529919702"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,611 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'swift_container_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', 
u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 
'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': 
{'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', 
u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', 
u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 
'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': 
u'host', 'restart': u'always'}, 'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', 
u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": 
"always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", 
"net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,626 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, 
"skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,688 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,720 p=25239 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-06-25 06:21:37,753 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,779 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,791 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,817 p=25239 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-06-25 06:21:37,893 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,906 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", 
"config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,907 p=25239 u=mistral | skipping: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,913 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": 
[{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,914 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': '/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,915 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': '/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,917 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 
'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,926 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/var/lib/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/var/lib/nova", "recurse": true}, {"owner": "nova:nova", "path": 
"/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:37,928 p=25239 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,039 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,040 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/keystone.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": 
[{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,041 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,047 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': 
u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,050 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,055 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,063 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': 
True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,074 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,075 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": 
[{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,081 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': '/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": 
"/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,085 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/haproxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,090 p=25239 u=mistral | skipping: [controller-0] => (item={'value': 
{'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,096 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", 
"perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,101 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,108 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,112 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 
'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/redis.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,118 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,123 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/glance_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": 
"/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,127 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,131 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "skip_reason": 
"Conditional result was False"} >2018-06-25 06:21:38,135 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,140 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,144 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,148 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': '/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,152 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,157 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': 
u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,161 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,165 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 
06:21:38,170 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': '/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,175 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/mysql.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": 
"/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,179 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,184 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": 
"sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,189 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,195 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': u'/var/log/keystone', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,198 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': 
'/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,202 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,207 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,220 p=25239 u=mistral | skipping: [controller-0] => (item={'value': 
{'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,221 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-api-metadata ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,222 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,226 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, 
{"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,231 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,235 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,240 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 
'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,251 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,253 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,255 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': 
u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,259 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,262 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file 
/usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,267 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,271 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,275 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': '/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,281 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, 
"preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,285 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,290 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume 
--config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,295 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 'path': u'/var/log/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/panko_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,299 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": 
true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,304 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,309 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': 
u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,312 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,320 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was 
False"} >2018-06-25 06:21:38,325 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,327 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,332 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,338 p=25239 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': '/var/lib/kolla/config_files/horizon.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": 
false}]}}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,384 p=25239 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-06-25 06:21:38,396 p=25239 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-25 06:21:38,427 p=25239 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-25 06:21:38,453 p=25239 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-25 06:21:38,481 p=25239 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-06-25 06:21:38,541 p=25239 u=mistral | skipping: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4'}], 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,585 p=25239 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-06-25 06:21:38,615 p=25239 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional 
result was False"} >2018-06-25 06:21:38,640 p=25239 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,660 p=25239 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-25 06:21:38,689 p=25239 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-06-25 06:21:39,502 p=25239 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "039e0b234f00fbd1242930f0d5dc67e8b4c067fe", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "868a394a237b10c579b0c7ac25057be6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529922098.79-223031554970629/source", "state": "file", "uid": 0} >2018-06-25 06:21:39,508 p=25239 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "039e0b234f00fbd1242930f0d5dc67e8b4c067fe", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "868a394a237b10c579b0c7ac25057be6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529922098.77-59975942771394/source", "state": "file", "uid": 0} >2018-06-25 06:21:39,555 p=25239 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "039e0b234f00fbd1242930f0d5dc67e8b4c067fe", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "868a394a237b10c579b0c7ac25057be6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529922098.73-258054126385780/source", "state": "file", "uid": 0} >2018-06-25 06:21:39,580 p=25239 u=mistral | TASK [Run puppet host configuration for step 5] ******************************** >2018-06-25 
06:21:50,348 p=25239 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-25 06:21:51,014 p=25239 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-25 06:21:58,151 p=25239 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-25 06:21:58,175 p=25239 u=mistral | TASK [Debug output for task which failed: Run puppet host configuration for step 5] *** >2018-06-25 06:21:58,239 p=25239 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.30 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller5]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 5.22 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Changed: 2", > " Out of sync: 2", > " Total: 226", > "Time:", > " Filebucket: 0.00", > " Concat fragment: 0.00", > " Concat file: 0.00", > " File line: 0.00", > " Cron: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " User: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.01", > " Sysctl: 0.01", > " Augeas: 0.02", > " Firewall: 0.02", > " 
Service: 0.21", > " Package: 0.44", > " Pcmk resource default: 0.46", > " Pcmk property: 0.46", > " Exec: 1.10", > " File: 1.17", > " Last run: 1529922117", > " Config retrieval: 4.79", > " Total: 8.69", > "Version:", > " Config: 1529922107", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 39]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-25 06:21:58,270 p=25239 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.18 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute5]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.54 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 150", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Concat file: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.01", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.01", > " File: 0.13", > " Service: 0.16", > " Package: 0.28", > " Exec: 0.29", > " Last run: 1529922110", > " Config retrieval: 2.58", > " Total: 3.48", > " Concat fragment: 
0.00", > " Filebucket: 0.00", > "Version:", > " Config: 1529922106", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. 
They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 37]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-25 06:21:58,290 p=25239 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 2.60 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage5]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.67 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 144", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Concat fragment: 0.00", > " Filebucket: 0.00", > " Concat file: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.01", > " Firewall: 0.01", > " Sysctl: 0.01", > " Augeas: 0.02", > " File: 0.07", > " Service: 0.16", > " Exec: 0.27", > " Package: 0.29", > " Last run: 1529922110", > " Config retrieval: 3.09", > " Total: 3.93", > "Version:", > " Config: 1529922106", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", 
> "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 37]", > " with Stdlib::Compat::Hash. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-25 06:21:58,317 p=25239 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 5] ***************** >2018-06-25 06:21:58,348 p=25239 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:58,378 p=25239 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:58,392 p=25239 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:58,421 p=25239 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 5] *** >2018-06-25 06:21:58,458 p=25239 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:21:58,492 p=25239 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:21:58,512 p=25239 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-25 06:21:58,541 p=25239 u=mistral | TASK [Start containers for step 5] ********************************************* >2018-06-25 06:21:59,242 p=25239 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-25 06:21:59,294 p=25239 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}